Sensitivity and Uncertainty Analysis of Coupled Reactor Physics Problems: Method Development for Multi-Physics in Reactors

Sensitivity and Uncertainty Analysis of Coupled Reactor Physics Problems

Method Development for Multi-Physics in Reactors

Zoltán Perkó

[Cover illustration: uncertain inputs passing through an S&U technique - forward problem, adjoint problem, Polynomial Chaos, integration levels, cubature points, Gaussian distribution, covariance matrix, R0, Rlimit, σR, µR - yielding an uncertain output.]


Propositions

Belonging to the PhD thesis of Zoltán Perkó

Sensitivity and Uncertainty Analysis of Coupled Reactor Physics Problems – Method Development for Multi-Physics in Reactors

1. The advantages of adjoint sensitivity analysis surpass its obvious shortcomings even for nonlinear problems. Therefore, adjoint methods should be implemented for nonlinear coupled problems as well. (Chapters 2 and 3 of this thesis)

2. Adaptive Polynomial Chaos methods are computationally cheaper and more accurate than sampling based methods even for problems with a relatively high (≈50) number of input parameters. (Chapters 4 and 5 of this thesis)

3. Research on deterministic sensitivity and uncertainty analysis methods is most beneficial for interdisciplinary applications. (C. M. H. Hartman, “Sensitivity and Uncertainty Analysis in Proton Therapy Treatment Plans Using Polynomial Chaos Methods”, MSc. thesis, 2014)

4. In practice, stochastic algorithms will always be favoured over deterministic ones for sensitivity and uncertainty analysis due to their ease of use.

5. Method and user related uncertainties are generally larger than data related uncertainties in computational sciences and should therefore get much more focus. (S. N. Aksan, F. D’Auria, H. Städtke, “User effects on the thermal-hydraulic transient system code calculations”, Nuclear Engineering and Design, 145, pp. 159-174, 1993)

6. Electricity generation by newly built nuclear power plants will only become economically competitive with traditional (fossil-fuelled) power plants if the latter are also obliged to cover the full costs of their waste management. (C. Mari, “The costs of generating electricity and the competitiveness of nuclear power”, Progress in Nuclear Energy, 73, pp. 153-161, 2014)

7. In countries with low levels of foreign language proficiency the cheapest and most practical method for increasing it is subtitling television programs instead of using dubbing or voice-over. (C. M. Koolstra, J. W. J. Beentjes, “Children’s Vocabulary Acquisition in a Foreign Language Through Watching Subtitled Television Programs at Home”, Education Technology Research and Development, 47, pp. 51-60, 1999)

8. Renewable energy sources alone cannot solve the problem of climate change. (Google “RE<C” project, https://www.google.org/rec.html)

9. The scientifically most sound resolution of the Fermi paradox¹ is that the contact cross section between intelligent spacefaring civilizations is far less than what could be expected based on spatial-temporal arguments. (G. D. Brin, “The ‘Great Silence’: the Controversy Concerning Extraterrestrial Intelligent Life”, Quarterly Journal of the Royal Astronomical Society, 24 (3), pp. 283-309, 1983)

10. Focusing on proliferation concerns in nuclear fuel cycle policies is counterproductive, since it prevents the nuclear community from applying already existing technologies (e.g. PUREX) and better directing research to safeguard nuclear materials. (J. Mueller, “Simplicity and Spook: Terrorism and the Dynamics of Threat Exaggeration”, International Studies Perspectives, 6, pp. 208-234, 2005)

¹ The Fermi paradox refers to the apparent contradiction between the seemingly high estimates for the probability of the existence of extra-terrestrial intelligent life forms and our lack of evidence for it.


These propositions are regarded as opposable and defendable, and have been approved as such by the promotor.



Sensitivity and Uncertainty Analysis of Coupled Reactor Physics Problems

Method Development for Multi-Physics in Reactors

Dissertation

for obtaining the degree of doctor at Delft University of Technology,

by authority of the Rector Magnificus Prof. ir. K.C.A.M. Luyben, chairman of the Board for Doctorates,

to be defended in public on 21 January 2015 at 12:30

by

Zoltán PERKÓ

Master of Science in Engineering Physics, Budapesti Műszaki Egyetem

This dissertation has been approved by the promotor: Prof. dr. ir. T.H.J.J. van der Hagen

Composition of the doctoral committee:

Rector Magnificus, chairman
Prof. dr. ir. T.H.J.J. van der Hagen, Delft University of Technology, promotor
Dr. ir. J.L. Kloosterman, Delft University of Technology, supervisor
Prof. dr. ir. drs. H. Bijl, Delft University of Technology
Prof. dr. A. W. Heemink, Delft University of Technology
Prof. C. Pain, Imperial College London, United Kingdom
Prof. P. Ravetto, Politecnico di Torino, Italy
Dr. O.P. Le Maître, LIMSI-CNRS, Orsay, France

Dr. Danny Lathouwers has provided invaluable guidance and support in the preparation of this thesis.

© 2014, Zoltán Perkó

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without prior permission from the copyright owner.

ISBN 978-94-6295-071-9

Keywords: Sensitivity analysis, uncertainty quantification, adjoint, Polynomial Chaos, coupled systems

The research described in this thesis was performed in the Physics of Nuclear Reactors (PNR)/Nuclear Energy and Radiation Applications (NERA) section of the department of Radiation, Radionuclides & Reactors (R3)/Radiation Science and Technology (RST) of the Delft University of Technology, Delft, The Netherlands.

Cover design: Proefschriftmaken.nl || Uitgeverij BOXPress
Printed by: Proefschriftmaken.nl || Uitgeverij BOXPress
Published by: Uitgeverij BOXPress, ’s-Hertogenbosch


To my family

Financial support

The research described in this thesis was partially funded by the GoFastR Project within the 7th Framework Program of the European Commission under contract No. 249678.


Contents

1 Introduction 1
1.1 The Role of Computational Physics . . . 1
1.2 Importance of Uncertainties in Simulations . . . 4
1.3 Traditional Methods for Sensitivity and Uncertainty Analysis . . . 7
1.4 Contents and Overview of This Thesis . . . 11

2 Adjoint Techniques for Coupled Criticality Problems 15
2.1 Introduction . . . 15
2.2 Theory . . . 17
2.3 Solution of the Coupled Adjoint Problem . . . 25
2.4 Application to a One-Dimensional Slab . . . 27
2.5 Results . . . 32
2.6 Summary . . . 40

3 Practical Applicability of Adjoint Techniques to Coupled Problems 43
3.1 Theory . . . 44
3.2 Application . . . 56
3.3 Results . . . 60
3.4 Summary . . . 68

4 Adaptive Polynomial Chaos Techniques for Multi-Physics Problems 71
4.1 Introduction . . . 71
4.2 Theory . . . 74
4.3 Application . . . 102
4.5 Summary . . . 117

5 Large Scale Application of Adaptive PCE 121
5.1 Introduction . . . 121
5.2 Computational Cost Reduction Techniques . . . 122
5.3 Post-Processing PCE . . . 124
5.4 Application to a Gas Cooled Fast Reactor Transient . . . 126
5.5 Sources of Uncertainties . . . 132
5.6 Results . . . 141
5.7 Computational Costs and Cost Reduction Techniques . . . 155
5.8 Summary . . . 159

6 Conclusions and Recommendations 163
6.1 Conclusions . . . 163
6.2 Recommendations . . . 165

Bibliography 167

A Notation 179

B Derivation of Perturbation Formulas and Adjoint Problems 181
B.1 Derivation of the Power Perturbation Expressions . . . 181
B.2 Derivation of the Eigenvalue Perturbation Expressions . . . 183

C Constrained Quantities in Sensitivity and Uncertainty Analysis 187
C.1 Constrained Sensitivities . . . 187
C.2 Covariance Matrix of Linearly Constrained Quantities . . . 189
C.3 Sensitivities with Polynomial Chaos Expansion . . . 191

Summary 195

Samenvatting 197

Acknowledgements 201

List of Publications 203


Chapter 1

Introduction

This thesis is about the development of novel methods for the sensitivity and uncertainty (S&U) analysis of coupled multi-physics problems encountered in reactor physics. To explain the main motivation behind the presented work, this chapter gives a general introduction to S&U analysis. First the relevance of sensitivity and uncertainty analysis within the domain of computational physics is briefly reviewed, then the most important traditional methods for performing S&U analysis are outlined. Finally, the main research topics, namely adjoint methods and adaptive Polynomial Chaos techniques, are briefly discussed and an overview of the contents is given.

1.1 The Role of Computational Physics

Traditionally physics is divided into two equally important, complementary branches: theoretical and experimental physics. In theoretical physics fundamental axioms are established, from which physical laws expressed as mathematical equations are derived to explain known phenomena and predict new ones. In reactor physics for example, the linear Boltzmann equation is obtained from the basic principle of particle conservation. The resulting transport equation seen in Figure 1.1 reflects this by stating that the change of the φ flux of neutrons (or the equivalent density of neutrons) having energy E and traveling in direction Ω at any point r in space is equal to the difference between the F̂φ source and the L̂φ loss of such neutrons. By specifying the individual source and loss terms - usually including fission in the former and leakage, absorption and scattering in the latter - the transport equation can then be used to predict the neutron distribution in a nuclear reactor or, for instance, the neutron dose beyond the surrounding biological shielding.

In experimental physics practical techniques and methods are used, and experimental devices are constructed with which reproducible and quantitative measurements can be performed to check theoretical predictions, discover new phenomena and measure various physical quantities. For the nuclear field the assembly of cross section libraries is the most important example, which is an extensive experimental procedure. First, empirical data is collected and recorded about the cross sections (or probabilities) of particular reactions between neutrons and various nuclei, typically in multiple measurements, often done by several different experimentalists in independent laboratories. Then the raw data is analysed and combined with theoretical models, after which the results are reported in some structured format, such as the Evaluated Nuclear Data Files (ENDF) (ENDFB6 Format Manual). Finally, these files are assembled into comprehensive cross section libraries, containing the energy dependent cross sections of all types of reactions for a wide range of isotopes. In Figure 1.1 an example of the outcome of such a procedure can be seen, where the fission cross section of Pu-239 taken from the most recent ENDF/B-VII.1 evaluation is depicted (IAEA, 2014).

With the rapid development of computers in the 20th century, computational physics became a third, equally important branch of physics. It focuses on developing and implementing fast, efficient and accurate numerical algorithms to solve the mathematical equations describing the behaviour of physical systems. On one hand computational physics only complements the traditional disciplines, since today performing state of the art theoretical or experimental research is no longer possible without some - if not intensive - computational effort. Series expansions with hundreds of terms, calculating thousands of integrals and derivatives, or handling and processing vast amounts of experimental data are only a few of the examples where sophisticated numerical tools are required. On the other hand computational physics has a place of its own, since it enables the solution of problems for which analytical approaches are inapplicable and experimental procedures are impractical or impossible. It is safe to say that practically all real life systems of interest fall into this category, where performing measurements is too expensive and time consuming,

[Figure 1.1 combines three examples, one for each of the three branches (theoretical, experimental and computational physics): the time-dependent neutron transport equation, (1/v) ∂φ(r, E, Ω, t)/∂t = F̂φ(r, E, Ω, t) − L̂φ(r, E, Ω, t); the Pu-239 fission cross section (barns) from the ENDF/B-VII.1 evaluation versus incident energy (MeV); and a schematic of a gas cooled reactor system (active core, upper and lower plenum, down-comer, heat-exchanger, He accumulator, primary He circuit, secondary He/N2 circuit, water circuit, DHR loop).]

Figure 1.1: The three main branches of physics with examples from nuclear systems

and where even if the physical laws for all relevant processes are known, the governing equations are so complicated that they seldom have an analytical solution. Engineering applications provide plenty of examples; the typical one within the domain of reactor physics is the transient behaviour of a whole nuclear power plant, which is determined by several interacting phenomena, all depending on different properties and acting on different spatial and temporal scales. The neutronic and thermal-hydraulic evolution of the active core, thermo-mechanical changes due to thermal expansion, chemical processes, and the effects of various primary, secondary or tertiary loop devices (such as pumps, heat-exchangers, turbines, valves, etc.) are all relevant for an accurate analysis, as are their delicate inter-dependencies. For such simulations usually several uni-physics computer codes have to be used, appropriate models have to be built, coupling interfaces have to be designed, etc. The result is a complicated model of the nuclear system (such as the one seen in Figure 1.1) which, if well built, captures all important details and can be used to calculate the anticipated response of the reactor to various incidents and accidental events.

1.2 Importance of Uncertainties in Simulations

Before the results obtained by simulations can be trusted it is imperative to ensure that they are representative of reality, i.e. that under the given circumstances reality would behave as predicted by our computational model. Two processes aim to address this issue: verification and validation (V&V) (Oberkampf and Trucano, 2008):

• Verification makes sure that the mathematical models are correctly programmed in a computer code and are solved properly; hence it mainly focuses on the numerical approximations, discretization errors, truncations, etc. The main tools of verification are the method of manufactured solutions, comparison with known analytical solutions, or comparison with highly accurate solutions of special or simplified cases.

• Validation makes sure that the computational model is physically accurate, i.e. that all relevant processes are included, the used mathematical simplifications and correlations are valid, and ultimately that the software is an acceptable representation of the real world from the perspective of its intended use. The key activities of validation are the quantitative comparison of computational results with experimentally measured data, the assessment of accuracy, and the estimation of accuracy for extrapolated use of the code (i.e. under conditions that fall outside those of the validation experiments).

Uncertainties - or in other words our lack of knowledge about the exact values of certain parameters and variables, numerical imprecisions and hidden physical processes - play an essential role throughout V&V activities. First, during verification the numerical uncertainties must be quantified and proven to be acceptably low. Second, during validation the conducted experiments must be accompanied by a rigorous quantification of the uncertainties; uncertainty information should be provided in terms of intervals, or incomplete or complete probability distributions, for as many conditions and measured quantities as possible. Third, when comparing computational results with experimental data the imprecise knowledge about the values of the input parameters has to be taken into account by propagating the input uncertainties to the computed outputs; this last action is called uncertainty propagation.

Uncertainties are commonly divided into two groups: aleatoric and epistemic uncertainties. Aleatoric uncertainties affect parameters which are inherently stochastic and whose values are therefore impossible to determine exactly. For example the imprecise representation of numbers on computers, fluctuations in weather conditions or neutron noise due to the stochasticity of nuclear reactions are all aleatoric uncertainties. Epistemic uncertainties affect parameters whose values are theoretically possible to know exactly; however, they are not known, most often due to the lack of experimental data. In reactor physics the most typical examples are cross section uncertainties caused by measurement uncertainties (see Figure 1.2). A key difference between the two classes is that while epistemic uncertainties can be decreased with better measurements, aleatoric uncertainties cannot, due to their inherent stochastic nature.

In the terminology of verification and validation activities the general terms of uncertainty analysis (UA) and uncertainty quantification (UQ) encompass all of the aforementioned topics related to uncertainties. However, they are frequently used in a narrower sense as synonyms for uncertainty propagation, focusing only on data type epistemic uncertainties and their effects on the computed outputs. Sensitivity analysis (SA) is a closely related, almost inseparable discipline, which studies the connection between the inputs of a model or experiment and the calculated or measured outputs. It can provide a deeper understanding of the problems being investigated, can reveal the main mechanisms in complex systems, and can identify the importance of the different parameters and their contribution to the output uncertainties. Throughout this thesis these two phrases will be used with these meanings: sensitivity analysis refers to determining the connection between the outputs and inputs, and uncertainty analysis refers to the propagation of input uncertainties to the computed outputs.

As computational physics plays an increasingly central role already in the planning phase of most engineering projects, verification and validation activities in general, and sensitivity and uncertainty analysis techniques in particular, are expected to gain more importance as well. This is especially true for the nuclear field, where S&U analysis has long been an essential part of the design and operation of nuclear installations (Usachev, 1964; Gandini, 1967; Stacey, 1974; Williams, 1986; Ronen, 1988; Cacuci, 2003), for two main reasons. First, simulations are routinely used to ensure the safety of reactors during normal and off-normal conditions, and inaccurate predictions from such calculations can clearly have severe consequences. Second, cross sections, the basic data for reactor calculations, are characterized by significantly higher uncertainties than most other physical quantities, as exemplified by Figure 1.2 (note the logarithmic scale of the cross section). As regulatory bodies have in recent years started to shift their licensing approach from conservatism (i.e. the use of large engineering margins to the failure point in order to make up for hidden or unknown processes) to Best Estimate Plus Uncertainty (BEPU) methodologies (Wilson, 2013), where the best possible estimations are made, augmented by rigorous sensitivity and uncertainty analysis, the emphasis on uncertainties has only intensified. Since most S&U analysis methodologies are either computationally expensive or involve significant additional modelling and coding efforts, the development of affordable, accurate and easily applicable S&U analysis techniques is expected to remain a prominent issue, and is the main topic of this thesis.

[Figure: log-log plot of the Pu-239 fission cross section (barns) versus incident energy (MeV), showing the ENDF/B-VII.1 evaluation together with measurements by Wagemans (1980), Rochman (2005, 2006) and Tovesson (2010).]

Figure 1.2: Example of uncertainty for the Pu-239 fission cross section. Experimental data from multiple measurements are shown together with the ENDF/B-VII.1 evaluation. Data taken from (IAEA, 2014).

1.3 Traditional Methods for Sensitivity and Uncertainty Analysis

There are several well established methods for performing sensitivity and uncertainty analysis. The main distinction is usually made between stochastic and deterministic methods. Stochastic, or in other words sampling based, methods use repeated execution of the same simulation with different realizations of the uncertain input parameters. They are therefore conceptually simple and relatively easy to perform, however at the cost of significant computational overhead. Deterministic methods rely on solving a reformulated version of the problem at hand to incorporate uncertainties. As a result they are in general theoretically more complicated than stochastic techniques and require substantial analysis and additional code development; however, they are computationally cheaper. In the rest of this section a brief overview of the most often used S&U analysis techniques is given.

1.3.1 Stochastic Methods

Stochastic techniques address uncertainties in a very straightforward way by simulating different “possible scenarios” according to the probability of their occurrence and using the results to draw conclusions about the randomness of the quantities of interest. Correspondingly, all these methods follow the same three steps:

• Generation of input parameters: during this step a sufficient number of random realizations of the α input parameters are generated according to their pα(α) joint probability density function (α ∈ RN represents the N separate input parameters). The result is the set {αi : i = 1, ..., Ns}, containing Ns different values for each input parameter.

• Propagation of samples: during this step the same simulation is executed Ns times, each time with a different αi realization of the inputs. The outcome is Ns realizations of the R(α) ∈ RNR quantities of interest, constituting the mapping {αi → Ri = R(αi) : i = 1, ..., Ns}.

• Post-processing of results: in this step the corresponding input-output pairs are used to compute response moments (mean, standard deviation, skewness, etc.), correlations (correlation and covariance matrices), and complete probability density functions or cumulative distribution functions (PDFs or CDFs), to draw scatterplots, perform regression analysis and statistical tests, determine sensitivity coefficients (Pearson or partial correlation coefficients), etc.
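The three steps above can be sketched in a few lines of Python; the `response` function below is a hypothetical stand-in for a real coupled simulation, and the input distributions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "simulation": a scalar response R(alpha) of N = 3 inputs.
def response(alpha):
    return alpha[0] ** 2 + 0.5 * alpha[1] * alpha[2]

# Step 1: generate Ns realizations of the inputs from their joint PDF
# (here: independent Gaussians with illustrative means and standard deviations).
Ns = 10_000
mu = np.array([1.0, 2.0, 0.5])
sigma = np.array([0.1, 0.2, 0.05])
samples = rng.normal(mu, sigma, size=(Ns, 3))

# Step 2: propagate every sample through the simulation.
R = np.array([response(a) for a in samples])

# Step 3: post-process the input-output pairs into response statistics.
mean_R = R.mean()
std_R = R.std(ddof=1)  # unbiased estimator divides by Ns - 1
print(f"mean = {mean_R:.3f}, std = {std_R:.3f}")
```

In a real analysis step 2 would launch the coupled code once per sample, which is exactly the computational overhead the rest of this section tries to reduce.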

The most basic of the stochastic S&U analysis techniques is the brute force Monte Carlo (MC) approach, which uses standard methods for sampling the pα(α) joint probability density function of the inputs. In this case it is easy to construct unbiased estimators for the quantities of interest; for example the mean and the variance of any particular response R can be calculated as

µ̃R = (1/Ns) Σ_{i=1}^{Ns} R(αi)   and   σ̃R² = (1/(Ns − 1)) Σ_{i=1}^{Ns} (R(αi) − µ̃R)²,

respectively. The main drawback is that the convergence of these estimators is slow: the error typically decreases only according to 1/√Ns. Therefore several techniques have been designed to improve the convergence rate of the brute force MC approach via “smarter” sampling. Latin Hypercube Sampling (LHS) and Stratified Sampling for example achieve this by ensuring a better coverage of the input phase space (Helton and Davis, 2003; Helton et al., 2005; Gajev, 2012), while adaptive Monte Carlo methods use the information from previous samples to adaptively draw quasi-random samples of the inputs (Karaivanova and Dimov, 1998; Dimov and Georgieva, 2010).
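As an illustration of the idea behind LHS, a minimal numpy-only sketch for inputs on the unit hypercube (a real analysis would map each dimension through the corresponding inverse CDF):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """LHS design on [0, 1]^n_dims: each dimension is split into n_samples
    equal strata and exactly one point falls inside each stratum."""
    # One uniform point inside every stratum, per dimension.
    pts = (np.arange(n_samples)[:, None]
           + rng.random((n_samples, n_dims))) / n_samples
    # Independently permute the strata of each dimension.
    for d in range(n_dims):
        pts[:, d] = pts[rng.permutation(n_samples), d]
    return pts

rng = np.random.default_rng(0)
design = latin_hypercube(100, 2, rng)
# Each of the 100 strata of every dimension holds exactly one sample.
strata = np.sort(np.floor(design * 100).astype(int), axis=0)
print(np.array_equal(strata, np.tile(np.arange(100)[:, None], (1, 2))))
```

This guaranteed one-sample-per-stratum coverage is what reduces the variance of the estimators compared to brute force sampling of the same size.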

The slow convergence of MC techniques is not an issue if one is only interested in confidence intervals, i.e. the ranges where output quantities lie with a given probability. The method to produce confidence intervals with relatively few random samples of the inputs was first introduced by Wilks (Wilks, 1942), and was later extended and is known today as the GRS methodology (Wald, 1943; Macian et al., 2007; Vinai et al., 2007; Glaeser, 2008). While this approach is very helpful for regulatory purposes, unfortunately it provides little information about anything other than the constructed confidence intervals.
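For illustration, the first order, one-sided Wilks formula gives the smallest number of samples Ns for which the largest observed output bounds the γ quantile with confidence β, i.e. the smallest Ns with 1 − γ^Ns ≥ β:

```python
import math

def wilks_sample_size(coverage, confidence):
    """Smallest Ns such that the largest of Ns random samples bounds the
    'coverage' quantile with probability 'confidence' (first order,
    one-sided Wilks criterion): 1 - coverage**Ns >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

# The classic 95 %/95 % criterion used in GRS-type analyses:
print(wilks_sample_size(0.95, 0.95))  # 59
```

Note that the sample size is independent of the number of uncertain inputs, which is what makes the approach attractive for large models.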

The major advantage all stochastic methods share is that the convergence rate of the moments of quantities of interest does not depend strongly on the number N of input parameters. This, together with their relatively easy implementation, makes them very attractive for practical applications, despite their slow convergence. Moreover, for truly large scale problems with thousands of input parameters and strong nonlinearities they often represent the only viable solution for performing S&U analysis. As a result they are regularly used in industry, as well as in research, where they are frequently considered as computational benchmarks against which other methods are compared and verified.

1.3.2 Deterministic Methods

Deterministic methods are most often based on the assumption that the output quantities of interest can be represented (at least to a certain accuracy) as some analytical function of the uncertain input parameters. Once this assumption is made a corresponding new mathematical problem is constructed with which the functional dependence can be determined.

The most significant work has probably been done on perturbation techniques (Ronen, 1988; Williams, 1986; Cacuci, 2003), where the assumption is that the particular response of interest (R) is well approximated by a Taylor series around the α0 reference values of the input parameters, i.e. that:

R(α) = R(α0) + Σ_{j=1}^{N} (∂R/∂αj)|0 (αj − α0j) + (1/2) Σ_{j=1}^{N} Σ_{k=1}^{N} (∂²R/∂αj∂αk)|0 (αj − α0j)(αk − α0k) + ...   (1.1)

Naturally the expansion has to be truncated, and the accuracy of this approximation, as well as the associated computational costs, strongly depend on how many terms are included. First order perturbation techniques only take into account the linear terms, whereas higher order techniques are usually truncated at the second order.
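Truncated at first order, Eq. (1.1) leads directly to the well-known "sandwich rule" σR² = S C Sᵀ for propagating an input covariance matrix C through the vector of sensitivities S; a small numerical sketch with illustrative (made-up) values:

```python
import numpy as np

# First order sensitivities dR/dalpha_j at the reference point (illustrative).
S = np.array([2.0, -0.5, 1.0])

# Covariance matrix of the three input parameters (illustrative).
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.00],
              [0.00, 0.00, 0.01]])

# Linear (first order Taylor) uncertainty propagation: var(R) = S C S^T.
var_R = S @ C @ S
print(f"sigma_R = {np.sqrt(var_R):.4f}")
```

The whole difficulty of deterministic S&U analysis is hidden in obtaining S; once the sensitivities are available, the propagation itself is a single matrix product.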

Several methods are available for obtaining first order information. The most straightforward applies one-at-a-time perturbations of the individual input parameters (OAT method) and uses finite differences to calculate the first order derivatives (Ronen, 1988; Saltelli et al., 2000). A different approach is the Forward Sensitivity Analysis Procedure (FSAP), where a linearized version of the problem is solved for a perturbed set of input parameters (Cacuci, 2003; Zhao and Mousseau, 2012). For small systems this can be derived analytically, while for large scale applications automatic differentiation algorithms can be used, although these pose some challenges from the implementation point of view (Anitescu et al., 2007; Alexe et al., 2010). Of all first order perturbation methods adjoint techniques (or ASAP, standing for Adjoint Sensitivity Analysis Procedure) are by far the most successful (Williams, 1986; Cacuci, 2003). Their main advantage is that the number of calculations depends on the number NR of output quantities of interest instead of the number N of input parameters. This makes them ideal for many applications, especially if the underlying problem is linear or only mildly nonlinear (which explains their great success in the nuclear field). They are, however, conceptually more difficult than other techniques and may involve significant code development efforts, particularly for nonlinear systems.
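The OAT method in its simplest form is one forward finite difference per input parameter; a sketch, with a hypothetical response function standing in for a real code:

```python
import numpy as np

def response(alpha):
    # Hypothetical nonlinear response, standing in for a full reactor code.
    return alpha[0] ** 2 + np.sin(alpha[1]) + alpha[0] * alpha[2]

def oat_sensitivities(model, alpha0, rel_step=1e-6):
    """First order derivatives dR/dalpha_j from one-at-a-time forward
    differences: one extra model evaluation per input parameter."""
    alpha0 = np.asarray(alpha0, dtype=float)
    R0 = model(alpha0)
    sens = np.empty_like(alpha0)
    for j in range(alpha0.size):
        h = rel_step * max(abs(alpha0[j]), 1.0)
        perturbed = alpha0.copy()
        perturbed[j] += h
        sens[j] = (model(perturbed) - R0) / h
    return sens

sens = oat_sensitivities(response, [1.0, 0.5, 2.0])
print(sens)  # analytically: [2*a0 + a2, cos(a1), a0] = [4.0, 0.8776, 1.0]
```

The N + 1 model runs required here, versus the NR adjoint solves of ASAP, is exactly the trade-off discussed above.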

A few papers have been published on higher order perturbation methods (Gandini, 1978; Moore and Turinsky, 1998; Gilli et al., 2013a). The general conclusion of these studies is that while the accuracy of predictions can be improved, it comes at the cost of significantly higher code development effort and computational times. Higher order perturbation techniques have therefore remained of academic interest only; so far their challenging application to larger scale problems has mostly prevented their practical use.

Recently, a very promising new approach has been presented, which is based on efficient subspace methods (ESM) and aims at identifying the perturbation modes which are most essential for the responses of interest (Abdel-Khalik et al., 2008). This is done by the extensive use of matrix rank revealing decompositions at each level of the (multi-physics) simulations to find low rank approximations of the large dense matrices associated with the problems, without having to first assemble these. The works published so far show great potential (Wang and Abdel-Khalik, 2011; Kennedy et al., 2012; Bang and Abdel-Khalik, 2013; Wang and Abdel-Khalik, 2013; Abdel-Khalik et al., 2013), but their practical usefulness is still to be proven.



Finally, spectral methods represent another set of deterministic techniques which have received a lot of attention lately. The original idea was developed by Wiener already in 1938 (Wiener, 1938), but their practical application in fluid dynamics (Xiu and Karniadakis, 2003), structural mechanics (Ghanem and Spanos, 1991) or neutron transport (Williams, 2007) only started 10-15 years ago. Spectral methods are based on the idea of representing all stochastic quantities (i.e. both inputs and outputs) as spectral expansions, similar to classical Fourier series. For this one has to define appropriate basis functions and determine the expansion coefficients of the series. Once this is done the original problem can be substituted by the constructed expansions, giving a direct connection between inputs and outputs. Spectral methods can be applied in both an intrusive and a non-intrusive fashion: the former involves dedicated code development and the definition of a new mathematical problem to be solved (i.e. different from the original problem), while the latter - similarly to statistical approaches - requires repeated execution of the same computer program with different values of the inputs.
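As a minimal, hypothetical illustration of the non-intrusive variant, the sketch below projects a one-dimensional model with a single standard normal input onto probabilists' Hermite polynomials, computing the expansion coefficients by Gauss-Hermite quadrature (all choices here - the model, the truncation order, the number of quadrature points - are illustrative):

```python
import numpy as np
from math import factorial

# Non-intrusive spectral projection for one Gaussian input xi ~ N(0,1):
# expand R(xi) on probabilists' Hermite polynomials He_k and obtain each
# coefficient c_k = E[R He_k] / E[He_k^2] = E[R He_k] / k! by quadrature,
# i.e. by repeated runs of the unmodified model.

def hermite_He(k, x):
    """Probabilists' Hermite polynomial via He_{n+1} = x He_n - n He_{n-1}."""
    h_prev, h = np.ones_like(x), x
    if k == 0:
        return h_prev
    for n in range(1, k):
        h_prev, h = h, x * h - n * h_prev
    return h

def pce_coefficients(model, order, n_quad=16):
    # Gauss-Hermite nodes/weights are for the weight exp(-x^2); rescale them
    # to the standard normal measure via xi = sqrt(2) x, w -> w / sqrt(pi).
    x, w = np.polynomial.hermite.hermgauss(n_quad)
    xi, w = np.sqrt(2.0) * x, w / np.sqrt(np.pi)
    return [np.sum(w * model(xi) * hermite_He(k, xi)) / factorial(k)
            for k in range(order + 1)]

# Toy model standing in for a code run; for R = exp(xi) the exact mean is
# e^(1/2) and the exact variance is e^2 - e.
c = pce_coefficients(np.exp, order=8)
mean = c[0]                                           # zeroth coefficient
variance = sum(factorial(k) * c[k] ** 2 for k in range(1, len(c)))
```

The mean and variance then follow directly from the coefficients, which is precisely the direct input-output connection mentioned above.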

1.4 Contents and Overview of This Thesis

The focus of this thesis is on the development of efficient sensitivity and uncertainty analysis techniques specifically for coupled multi-physics problems encountered in the nuclear field. The primary motivation for further research on S&U analysis comes from the need to accommodate the expanding use of simulations in both traditional and non-traditional fields. High resolution, tightly coupled multi-physics and multi-scale calculations are becoming progressively widespread in engineering (Ragusa and Mahadevan, 2009; Gaston et al., 2009; Hales et al., 2012; Guo et al., 2013; Keyes et al., 2013) and computational tools are increasingly used in interdisciplinary applications as well (e.g. in the medical field). Although the available computational capabilities continue to advance, the growing complexity and nonlinearity of such calculations often make existing stochastic methods prohibitively expensive, while at the same time rendering known deterministic approaches inaccurate or inapplicable. Therefore research on appropriate sensitivity and uncertainty analysis methods targeted at these kinds of simulations is not only of scientific interest, but also of significant practical relevance.

The first half of this thesis focuses on coupled criticality calculations, as they represent one of the most frequent problems to be solved in the nuclear



field. In such simulations usually neutron transport or diffusion is coupled to thermal-hydraulics, but other phenomena can be taken into account as well, such as fission product poisoning, crud formation, etc. To accommodate the sensitivity and uncertainty analysis of these systems this thesis presents an extension of classical adjoint methods. The main rationale for doing so is the immense success of adjoint techniques for pure transport problems, the computer programs already available for solving the adjoint transport equation, and the recent and anticipated future code development efforts for adjoint capable thermal-hydraulics codes (Fang et al., 2011; Farrell et al., 2013).

In the second half of this thesis transients (i.e. the time-dependent behaviour of reactors) are targeted, since they represent the second most common type of coupled calculations in reactor physics. Our method of choice is the non-intrusive spectral projection approach, since it is easily applicable to arbitrary problems and recent studies have shown promising convergence properties in various systems (Blatman and Sudret, 2011; Gilli et al., 2013b). These two attributes make it potentially highly useful for the S&U analysis of accidental events, especially in cases where legacy or proprietary codes are used to simulate transients and code modification is therefore not possible.

The structure of this thesis is as follows. In Chapter 2 the theoretical basis of the adjoint sensitivity analysis of coupled criticality problems is laid out. It is shown that the effects of perturbations can be interpreted in three different ways: either by letting the steady state power change, or by constraining the power and letting the eigenvalue differ from unity, or by adjusting a control parameter in parallel to preserve the criticality of the core at the desired power level. To derive sensitivities a coupled adjoint problem needs to be solved. The numerical aspects of this problem are also addressed and a solution recipe is given which relies on Krylov techniques and employs the individual neutronics and augmenting codes as preconditioners. Finally a proof of principle study is presented using a homogeneous one-dimensional slab with two-group diffusion theory, coupled to heat conduction and xenon poisoning.

In Chapter 3 the larger scale applicability of the coupled adjoint theory is demonstrated. For this an in-house developed, general purpose finite element multigroup discrete ordinates transport code is used, coupled to a purpose made adjoint capable thermal-hydraulics code. As a more realistic system an infinite array of fuel pins is simulated, where neutron transport determines the



power distribution in the pin, and the temperature distribution has a feedback on the multigroup cross sections. The major merit of this chapter is that it addresses the main implementation issues of the coupled adjoint theory, and reveals that only minor modifications are needed in the transport and the augmenting codes, since for most of the necessary operations the already available functions of the separate codes can be reused. The most significant effort lies in the development of the coupling scheme, not in the development of the coupled adjoint scheme.

In Chapter 4 we turn our attention to the development of novel adaptive spectral methods, namely grid and basis adaptive Polynomial Chaos Expansion (PCE) techniques. The novelty of this chapter is twofold. First, an adaptive numerical integration technique is devised based on the Gerstner algorithm, which aims at efficiently calculating all multidimensional integrals defining the expansion coefficients of the Polynomial Chaos (PC) basis vectors. Second, the issue of basis adaptivity is investigated and it is shown that by adaptively choosing the basis vectors to be included in the PCE the associated computational costs can be decreased significantly. The chapter finishes with the application of the developed Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm to three demonstrational problems, each showing its effectiveness compared to both Monte Carlo techniques and non-adaptive PC methods.

In Chapter 5 the truly large scale applicability of the FANISP algorithm is demonstrated by performing the S&U analysis of a Gas Cooled Fast Reactor (GFR) transient, with a completely detailed system model of the GFR2400 reactor design and 42 input uncertainties. To address so many input parameters, two cost reduction techniques are first presented: one based on an incremental increase of the global polynomial order of the used expansion, the other on an automatic dimensionality reduction using information about the individual effects of the inputs on the output of interest. Next the system model of the reactor is presented, with a detailed description of how the uncertainties are incorporated. Finally the results are analysed, showing the versatility and excellent performance of the FANISP algorithm.

To close the thesis, Chapter 6 contains some concluding remarks about the prospects of practical applications of the developed adjoint and spectral methods, together with some recommendations for future research.


Chapter 2

Adjoint Techniques for Coupled Criticality Problems

2.1 Introduction

The adjoint based sensitivity analysis of neutron transport problems is a powerful tool to investigate changes in responses of interest due to variations of the input parameters characterizing a system. Starting from the early works of Usachev (1964) and Gandini (1967), the theory of calculating perturbations in the critical eigenvalue of reactors and in functionals of the flux has been developed continuously and is well established today (Stacey, 1974; Williams, 1986). In parallel the implementation of traditional first order perturbation theory capabilities has become common practice, with an increasing number of commercial codes offering options for calculating various sensitivities using generalized perturbation theory (GPT) (Rimpault et al., 2002; Jessee et al., 2009). The sensitivity analysis of coupled problems is however still not daily practice, despite the extensive work done on various coupled systems.

One of the most researched topics regarding coupled reactor physics problems is fuel depletion. Gandini was the first to consider adjoint techniques for the sensitivity analysis of burn-up, without taking into account the coupling



between the flux and the changing nuclide concentrations (Gandini, 1975). His work was later extended by Williams (1979) to include this feedback, applying a quasi-static approximation for the forward problem to decouple the neutron and nuclide fields. The theory has been further developed to accommodate other circumstances, such as changing microscopic cross sections typical in light water reactors (Downar, 1992), constant power depletion following real life situations more closely than constant flux depletion (Yang and Downar, 1988), or fuel shuffling and reloading for example (Gandini, 1987; Choi and Downar, 1992).

Transient situations represent another area where coupling is evidently necessary. In such works, most often point-kinetics coupled to simplified thermal-hydraulics is considered and adjoint techniques are used to calculate sensitivities (Parks and Maudlin, 1981; Gilli et al., 2011). Some larger-scale applications of the Adjoint Sensitivity Analysis Procedure in the field have also been published (Cacuci and Ionescu-Bujor, 2005; Ionescu-Bujor et al., 2005).

A few papers deal with the sensitivity analysis of shielding problems, in which coupled neutron-gamma transport is performed and the adjoint transport equation is used to generate sensitivities (Roussin and Schmidt, 1971; Bartine et al., 1972; Williams and Goluoglu, 2007). However, these works only consider one-way coupling: secondary gamma particles induced by neutrons are taken into account, while gamma-neutron interactions - such as photo-fission or neutron separation caused by high energy photons - are neglected.

In this chapter the sensitivity analysis of coupled criticality calculations is discussed with two-way coupling between neutronics and one or more augmenting systems. The most relevant work to this research was done by Moore and Turinsky (1998), who used generalized perturbation theory to efficiently calculate boiling water reactor loading pattern characteristics with thermal-hydraulic feedbacks. Here, however, all variables describing the augmenting systems are taken into account and no a priori assumption is made on the strength of the feedback from the different augmenting unknowns. This becomes especially important when the derived sensitivities are used to accurately calculate uncertainties in various reactor parameters, since it enables all uncertain input data to be accounted for, i.e. the uncertainty of both the neutronics and the augmenting parameters can be propagated to the final responses. Moreover, for the calculation of the necessary adjoint functions a procedure is introduced here which relies on using the neutronics and the augmenting codes separately.



In Section 2.2 the theoretical background is presented with three different ways of interpreting perturbations, namely the power-, the eigenvalue- and the control parameter perturbation approaches. In Section 2.3 the numerical aspects of calculating the appropriate adjoint functions are investigated and a technique is proposed which uses the neutronics and augmenting codes as preconditioners to Krylov methods applied to the system of coupled adjoint equations. Section 2.4 presents a one-dimensional slab problem with thermal and fission product feedback to which the derived methods were applied, while in Section 2.5 some results are shown. Concluding remarks are included in Section 2.6.

2.2 Theory

2.2.1 Formulation of the Coupled Criticality Problem

Criticality problems in reactor physics can be described by Equation 2.1:

$$
\hat{L}(\alpha_n)\,\phi(x) = \lambda \hat{F}(\alpha_n)\,\phi(x). \qquad (2.1)
$$

Here x is a vector representing the independent variables (point in space, energy, direction), φ(x) and λ are the neutron flux and the critical eigenvalue (the dependent variables), while L̂ and F̂ are the usual loss and fission operators. α_n stands for the different input parameters (cross sections, geometrical boundaries, etc.). Equation 2.1 can describe any form of the neutron transport equation (S_N, P_L, diffusion) with any discretization scheme; x and φ(x) simply take the form of the appropriate independent and dependent variables (e.g. x representing spatial points and energy groups, and φ(x) group fluxes in the volumes in case of a multigroup diffusion approximation with finite volume discretization).

The solution of Equation 2.1 is the critical eigenvalue and the corresponding eigenfunction φ(x), which can be arbitrarily normalized. This normalization can be expressed as

$$
\frac{\left\langle C_f(x), \phi(x) \right\rangle_\phi}{C} = 1,
$$

where C_f(x) is a chosen constraint function, C is a pre-set constraint value, while ⟨ , ⟩_φ indicates integration over the phase space. Most commonly the flux is normalized to a certain power level P and only heat from fission is taken into account; in this case the constraint value is the power (C = P) and the constraint function takes the form of C_f(x) = P_f(x) = QΣ_f(x), where Q



is the energy released per fission. Other normalizations are also possible, e.g. to a detector response, to one fission-neutron, etc.

In a coupled criticality problem physical processes other than neutron transport are taken into account as well. The description of these extra phenomena requires additional input parameters (α_T), dependent variables (T) depending on independent variables y possibly different from x, and equations (N̂). There can be mutual feedback between the systems being coupled, hence some of the original neutronic parameters can become functions of the additional parameters and dependent variables. For example, if coupling to thermal-hydraulics is considered, T can represent the temperature field, coolant density or void fraction, N̂ would stand for the equations describing heat transfer, two-phase flow, etc., while y would simply be the point in space. In the work of Cacuci and Ionescu-Bujor (2005) a very accurate and detailed methodology is presented for transforming an uncoupled model into an augmented one. Here a simpler, but equally accurate approach is used: a complete set of input parameters (α) is considered, containing all parameters necessary to describe neutron transport, the additional phenomena and the feedbacks; moreover, the neutronics operators are allowed to depend explicitly on the augmenting dependent variables. Without loss of generality, in the rest of the theoretical description we will consider a single augmenting phenomenon, requiring one additional dependent variable T and equation N̂.

The coupled criticality problem can be formulated by Equations 2.2-2.4:

$$
\hat{L}(\alpha, T(y))\,\phi(x) = \lambda \hat{F}(\alpha, T(y))\,\phi(x) \qquad (2.2)
$$
$$
\hat{N}(\alpha, T(y), \phi(x)) = 0 \qquad (2.3)
$$
$$
\frac{\left\langle C_f(\alpha, T(y)), \phi(x) \right\rangle_\phi}{C} = 1. \qquad (2.4)
$$

The loss and fission operators, as well as the constraint function, now also depend on the augmenting dependent variables, whereas the augmenting equations are only indicated by a general operator N̂ acting linearly or nonlinearly on the augmenting dependent variables, the flux and the input parameters. For simplicity, in the rest of this chapter the dependence of φ and T on the independent variables x and y will not be explicitly shown.

With feedback between the augmenting system and neutronics, Equations 2.2-2.3 have a unique solution: the real physical steady state for which



λ = 1. In this case the flux normalization is no longer arbitrary and the constraint value C cannot be freely chosen, but is simply given by Equation 2.4. For example, in case of coupling to thermal-hydraulics, this would correspond to the total thermal power of the reactor at steady state for the given configuration. When Equations 2.2-2.4 are considered and the flux is normalized to the chosen constraint value, the eigenvalue may differ from 1, showing the off-criticality of the system at the prescribed conditions. Again thinking of thermal coupling, this would translate to whether the reactor can be operated at a given power level or not.

Based on the above, perturbations in coupled criticality problems can be considered in three different ways. Either the perturbed steady state is searched for and no arbitrary flux normalization is allowed (power perturbation); or the flux is constrained to the unperturbed constraint value and the k-effective is allowed to change (eigenvalue perturbation); or (multiple) perturbations are made so that the constraint value and the eigenvalue are both unchanged (control parameter perturbation). Power perturbation corresponds to the situation when feedbacks adjust the system to a new steady state after perturbations are made, which can be used for optimization purposes in the design of reactors. With eigenvalue perturbation the difference in the k-effective of the perturbed and the reference configuration is calculated, making it suitable for determining reactivity worths that take feedbacks into account. Lastly, control parameter perturbation is most useful for calculating sensitivities in operational conditions when the power is constrained and the reactor has to be critical. In the next three sections these three possibilities are examined.

2.2.2 Power Perturbation

For any (sensible) set of input parameters α Equations 2.2-2.3 always have a unique solution due to the feedbacks present between neutronics and the augmenting system. This is the true physical steady state, for which λ = 1. Therefore the problem can be formulated as

$$
\hat{L}(\alpha, T)\,\phi = \hat{F}(\alpha, T)\,\phi
$$
$$
\hat{N}(\alpha, T, \phi) = 0,
$$

and for a given parameter set α^0 the solution is φ^0 and T^0, describing the steady state. The corresponding unique power (constraint) value is

$$
P_0 = C_0 = \left\langle C_f(\alpha^0, T^0), \phi^0 \right\rangle_\phi = \left\langle P_f(\alpha^0, T^0), \phi^0 \right\rangle_\phi.
$$



Perturbations made to the input parameters (Δα) result in a different steady state, a possibly different power (constraint) value and changes in other responses of interest as well. These variations can be efficiently calculated by applying the standard adjoint sensitivity analysis procedure (Cacuci, 2003). Here only the final equations that need to be solved and the formulas for calculating the response changes are presented; a short summary of the full derivation is given in Section B.1. For notation see Appendix A.

The relative change in the power (constraint) value - up to first order - is given by Equation 2.5, where the direct effect ΔP^direct_Δα / P_0 = (1/P_0) ⟨ ∂P_f/∂α|_0 Δα, φ^0 ⟩_φ has been separated:

$$
\frac{\Delta P_{\Delta\alpha}}{P_0} = \frac{\Delta P^{\mathrm{direct}}_{\Delta\alpha}}{P_0} + \frac{1}{P_0}\left\langle \left(\left.\frac{\partial P_f}{\partial T}\right|_0\right)^{*} \phi^0, \Delta T \right\rangle_T + \frac{1}{P_0}\left\langle P_f^0, \Delta\phi \right\rangle_\phi. \qquad (2.5)
$$

As usual, we are concerned about the indirect terms containing Δφ and ΔT in Equation 2.5 and want to avoid recalculating the coupled problem for every input parameter perturbation. This can be achieved by solving the following adjoint problem for w^0_φ and w^0_T:

$$
\left(\hat{M}^0\right)^{*} w^0_\phi + \left(\left.\frac{\partial \hat{N}}{\partial \phi}\right|_0\right)^{*} w^0_T = \frac{w^0_P}{P_0}\, P_f^0 \qquad (2.6)
$$
$$
\left(\hat{J}^0\right)^{*} w^0_\phi + \left(\left.\frac{\partial \hat{N}}{\partial T}\right|_0\right)^{*} w^0_T = \frac{w^0_P}{P_0}\left(\left.\frac{\partial P_f}{\partial T}\right|_0\right)^{*} \phi^0. \qquad (2.7)
$$

In Equations 2.6-2.7, w^0_P is an arbitrarily chosen constant, and we introduced M̂^0 = L̂^0 − λ^0 F̂^0 and Ĵ^0ΔT = (∂M̂/∂T|_0 ΔT)φ^0 (with λ^0 = 1 in both expressions). The coupled adjoint operator on the left is basically identical to that used by Moore and Turinsky (1998), with the exception that here no a priori assumption is made on the strength of the feedback from the different augmenting variables. The boundary conditions usually need a more detailed discussion; here it is only emphasized that they have to be chosen in a way that the bilinear terms coming from the definition of the adjoint operators vanish, or at least become dependent only on the unperturbed solution, the known parameters and the parameter changes (Cacuci, 2003).



Having obtained the solution of Equations 2.6-2.7, the change in the power (constraint) value can be calculated by Equation 2.8:

$$
\frac{\Delta P_{\Delta\alpha}}{P_0} = \frac{\Delta P^{\mathrm{direct}}_{\Delta\alpha}}{P_0} - \frac{1}{w^0_P}\left\langle w^0_T, \left.\frac{\partial \hat{N}}{\partial \alpha}\right|_0 \Delta\alpha \right\rangle_T - \frac{1}{w^0_P}\left\langle w^0_\phi, \left.\frac{\partial \hat{M}}{\partial \alpha}\right|_0 \Delta\alpha\, \phi^0 \right\rangle_\phi. \qquad (2.8)
$$

The steady state power is not the only quantity we are interested in. For response functionals R (α, T, φ) the change caused by the perturbation of the input parameters can be written as

$$
\Delta R_{\Delta\alpha} = \left.\frac{\partial R}{\partial \alpha}\right|_0 \Delta\alpha + \left\langle \left.\frac{\partial R}{\partial T}\right|_0, \Delta T \right\rangle_T + \left\langle \left.\frac{\partial R}{\partial \phi}\right|_0, \Delta\phi \right\rangle_\phi = \Delta R^{\mathrm{direct}}_{\Delta\alpha} + \Delta R^{\mathrm{indirect}}_{\Delta\alpha}.
$$

Our main concern is again the evaluation of the indirect term, for which the appropriate adjoint problem that needs to be solved takes the form of

$$
\left(\hat{M}^0\right)^{*} w^P_\phi + \left(\left.\frac{\partial \hat{N}}{\partial \phi}\right|_0\right)^{*} w^P_T = \left.\frac{\partial R}{\partial \phi}\right|_0 \qquad (2.9)
$$
$$
\left(\hat{J}^0\right)^{*} w^P_\phi + \left(\left.\frac{\partial \hat{N}}{\partial T}\right|_0\right)^{*} w^P_T = \left.\frac{\partial R}{\partial T}\right|_0. \qquad (2.10)
$$

Having obtained the solution of Equations 2.9-2.10, the indirect term of the response change can be evaluated, leading to the following expression for the total perturbation:

$$
\Delta R_{\Delta\alpha} = \Delta R^{\mathrm{direct}}_{\Delta\alpha} - \left\langle w^P_T, \left.\frac{\partial \hat{N}}{\partial \alpha}\right|_0 \Delta\alpha \right\rangle_T - \left\langle w^P_\phi, \left.\frac{\partial \hat{M}}{\partial \alpha}\right|_0 \Delta\alpha\, \phi^0 \right\rangle_\phi. \qquad (2.11)
$$

As in practice reactors are usually operated at a given power level, it is worth calculating sensitivities when the power is constrained (or a chosen constraint value has to be met). This can be done if, parallel to the perturbation of the input parameters for which the sensitivities are to be calculated (Δα), one or more other parameters are perturbed as well, such that the power in the perturbed state is equal to that in the unperturbed state. This is very similar to the traditional k-reset procedure used in standard GPT (Williams, 1986). When a single parameter is used to reset the power, this can be designated as a control parameter (which will be denoted as α_c in the remainder of this chapter). The requirement is that the power changes



caused by the ∆α perturbation and the ∆αc control parameter change cancel each other, i.e. that

$$
\frac{\Delta P_{\Delta\alpha}}{P_0} = -\frac{\Delta P_{\Delta\alpha_c}}{P_0}.
$$

Equation 2.8 can be used to evaluate the power change of both perturbations, leading to the following formula for the change in the control parameter:

$$
\Delta\alpha_c = -\frac{w^0_P \dfrac{\Delta P^{\mathrm{direct}}_{\Delta\alpha}}{P_0} - \left\langle w^0_T, \left.\dfrac{\partial \hat{N}}{\partial \alpha}\right|_0 \Delta\alpha \right\rangle_T - \left\langle w^0_\phi, \left.\dfrac{\partial \hat{M}}{\partial \alpha}\right|_0 \Delta\alpha\, \phi^0 \right\rangle_\phi}{\dfrac{w^0_P}{P_0} \left\langle \left.\dfrac{\partial P_f}{\partial \alpha_c}\right|_0, \phi^0 \right\rangle_\phi - \left\langle w^0_T, \left.\dfrac{\partial \hat{N}}{\partial \alpha_c}\right|_0 \right\rangle_T - \left\langle w^0_\phi, \left.\dfrac{\partial \hat{M}}{\partial \alpha_c}\right|_0 \phi^0 \right\rangle_\phi}. \qquad (2.12)
$$

Once the changes in all input parameters are known, the power-constrained sensitivities can be calculated by using Equation 2.11 to evaluate the response change due to both the original perturbations and the control parameter variation (with ΔR^direct_{Δα+Δα_c} = ΔR^direct_{Δα} + ΔR^direct_{Δα_c} the total direct change):

$$
\Delta R^{P\text{-reset}}_{\Delta\alpha} = \Delta R^{\mathrm{direct}}_{\Delta\alpha+\Delta\alpha_c} - \left[\left\langle w^P_T, \left.\frac{\partial \hat{N}}{\partial \alpha}\right|_0 \Delta\alpha \right\rangle_T + \left\langle w^P_T, \left.\frac{\partial \hat{N}}{\partial \alpha_c}\right|_0 \Delta\alpha_c \right\rangle_T\right] - \left[\left\langle w^P_\phi, \left.\frac{\partial \hat{M}}{\partial \alpha}\right|_0 \Delta\alpha\, \phi^0 \right\rangle_\phi + \left\langle w^P_\phi, \left.\frac{\partial \hat{M}}{\partial \alpha_c}\right|_0 \Delta\alpha_c\, \phi^0 \right\rangle_\phi\right]. \qquad (2.13)
$$

2.2.3 Eigenvalue Perturbation

In the sensitivity analysis of reactor physics problems one of the prime interests is the change of the critical eigenvalue due to perturbations of the input parameters, and this is also the case for coupled criticality calculations. When feedbacks are present the eigenvalue only differs from unity if the flux is constrained, hence the problem can be described by

$$
\hat{L}(\alpha, T)\,\phi = \lambda \hat{F}(\alpha, T)\,\phi
$$
$$
\hat{N}(\alpha, T, \phi) = 0
$$
$$
\frac{\left\langle P_f(\alpha, T), \phi \right\rangle_\phi}{P_0} = 1,
$$



where P_0 is the pre-set power (constraint) value. In the unperturbed system (characterized by a parameter set α^0) the eigenvalue is λ^0 = 1 and the corresponding steady state solution is φ^0 and T^0. Perturbing the input parameters while constraining the flux to the power P_0, the eigenvalue may differ from λ^0; this difference can be interpreted as the reactivity worth of the perturbation and can be calculated as

$$
\Delta\lambda = \frac{-w^0_P \dfrac{\Delta P^{\mathrm{direct}}_{\Delta\alpha}}{P_0} + \left\langle w^0_T, \left.\dfrac{\partial \hat{N}}{\partial \alpha}\right|_0 \Delta\alpha \right\rangle_T + \left\langle w^0_\phi, \left.\dfrac{\partial \hat{M}}{\partial \alpha}\right|_0 \Delta\alpha\, \phi^0 \right\rangle_\phi}{\left\langle w^0_\phi, \hat{F}^0 \phi^0 \right\rangle_\phi}. \qquad (2.14)
$$

In Equation 2.14, w^0_φ and w^0_T are the solutions to Equations 2.6-2.7 for the arbitrarily chosen w^0_P. The derivation of Equation 2.14 and of the corresponding adjoint problem is quite similar to that presented by Williams (1986) for pure criticality problems, and is fully detailed in Section B.2. As expected, Equation 2.14 reduces to the standard eigenvalue perturbation formula when no coupling is present.
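For illustration, in the uncoupled limit the augmenting term and the direct power term vanish and w^0_φ plays the role of the usual adjoint flux φ†, so that Equation 2.14 collapses to the familiar first order result (writing ∂M̂/∂α|_0 Δα = ΔL̂ − λ^0 ΔF̂):

$$
\Delta\lambda = \frac{\left\langle \phi^{\dagger}, \left(\Delta\hat{L} - \lambda^0 \Delta\hat{F}\right)\phi^0 \right\rangle_\phi}{\left\langle \phi^{\dagger}, \hat{F}^0 \phi^0 \right\rangle_\phi}.
$$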

When functional responses other than the critical eigenvalue are considered, the adjoint problem that needs to be solved is

$$
\left(\hat{M}^0\right)^{*} w^\lambda_\phi + \left(\left.\frac{\partial \hat{N}}{\partial \phi}\right|_0\right)^{*} w^\lambda_T = \frac{w^\lambda_P}{P_0}\, P_f^0 + \left.\frac{\partial R}{\partial \phi}\right|_0 \qquad (2.15)
$$
$$
\left(\hat{J}^0\right)^{*} w^\lambda_\phi + \left(\left.\frac{\partial \hat{N}}{\partial T}\right|_0\right)^{*} w^\lambda_T = \frac{w^\lambda_P}{P_0}\left(\left.\frac{\partial P_f}{\partial T}\right|_0\right)^{*} \phi^0 + \left.\frac{\partial R}{\partial T}\right|_0, \qquad (2.16)
$$

with the auxiliary condition that

$$
\left\langle w^\lambda_\phi, \hat{F}^0 \phi^0 \right\rangle_\phi = 0.
$$

The latter condition is needed so that the term Δλ⟨w^λ_φ, F̂^0 φ^0⟩_φ in the derivation disappears. As in the coupled case the neutronic adjoint is unique (unless the coupled adjoint operator is singular), this can be achieved by the proper choice of w^λ_P (see Section B.2). Having obtained the solution of Equations 2.15-2.16, the response variation due to the perturbation in the input



parameters can be calculated according to Equation 2.17:

$$
\Delta R_{\Delta\alpha} = \Delta R^{\mathrm{direct}}_{\Delta\alpha} + \frac{w^\lambda_P}{P_0}\left\langle \left.\frac{\partial P_f}{\partial \alpha}\right|_0 \Delta\alpha, \phi^0 \right\rangle_\phi - \left\langle w^\lambda_T, \left.\frac{\partial \hat{N}}{\partial \alpha}\right|_0 \Delta\alpha \right\rangle_T - \left\langle w^\lambda_\phi, \left.\frac{\partial \hat{M}}{\partial \alpha}\right|_0 \Delta\alpha\, \phi^0 \right\rangle_\phi. \qquad (2.17)
$$

Just like in standard GPT, one can apply a k-reset procedure to obtain criticality constrained sensitivities. Equation 2.14 can be used to evaluate both sides of

$$
\left.\frac{\partial \lambda}{\partial \alpha}\right|_0 \Delta\alpha = -\left.\frac{\partial \lambda}{\partial \alpha_c}\right|_0 \Delta\alpha_c,
$$

leading to the same formula for the control parameter change as Equation 2.12. Finally, Equation 2.17 can be used to evaluate the change in the response both due to the input parameter perturbations and the control parameter change, leading to the following expression for the criticality-constrained sensitivities (which coincide with the power-constrained sensitivities):

$$
\Delta R^{k\text{-reset}}_{\Delta\alpha} = \Delta R^{\mathrm{direct}}_{\Delta\alpha} + \Delta R^{\mathrm{direct}}_{\Delta\alpha_c} + \frac{w^\lambda_P}{P_0}\left\langle \left.\frac{\partial P_f}{\partial \alpha}\right|_0 \Delta\alpha + \left.\frac{\partial P_f}{\partial \alpha_c}\right|_0 \Delta\alpha_c, \phi^0 \right\rangle_\phi - \left\langle w^\lambda_T, \left[\left.\frac{\partial \hat{N}}{\partial \alpha}\right|_0 \Delta\alpha + \left.\frac{\partial \hat{N}}{\partial \alpha_c}\right|_0 \Delta\alpha_c\right] \right\rangle_T - \left\langle w^\lambda_\phi, \left[\left.\frac{\partial \hat{M}}{\partial \alpha}\right|_0 \Delta\alpha + \left.\frac{\partial \hat{M}}{\partial \alpha_c}\right|_0 \Delta\alpha_c\right] \phi^0 \right\rangle_\phi. \qquad (2.18)
$$

2.2.4 Control Parameter Perturbation

A direct way to calculate constrained sensitivities is to consider the problem with the constraint from the beginning and to make simultaneous perturbations to the desired α parameters and the α_c control parameter in order to preserve criticality at the unperturbed power (constraint) value. Applying the standard ASAP again, the formula obtained for the control parameter change is the same as Equation 2.12 and the adjoint problem that needs to be solved is identical to Equations 2.6-2.7. For responses other than the control parameter the adjoint problem is similar to Equations 2.9-2.10, but the sources are more complicated as they contain the extra terms coming from the constraint; furthermore, there is an extra equation to solve to obtain the adjoint corresponding to the constraint



equation. The resulting response changes are, however, exactly the same as the power-reset and k-reset response variations in the power and eigenvalue perturbation procedures; hence the remainder of this chapter focuses on these two approaches. For the details see (Perkó et al., 2013).

2.3 Solution of the Coupled Adjoint Problem

The different perturbation procedures lead to very similar coupled adjoint problems. With the exception of the control parameter perturbation, for a given value of w_P they are all of the form of Equations 2.19-2.20:

$$
\left(\hat{M}^0\right)^{*} w_\phi + \left(\left.\frac{\partial \hat{N}}{\partial \phi}\right|_0\right)^{*} w_T = S_\phi \qquad (2.19)
$$
$$
\left(\hat{J}^0\right)^{*} w_\phi + \left(\left.\frac{\partial \hat{N}}{\partial T}\right|_0\right)^{*} w_T = S_T. \qquad (2.20)
$$

One way of dealing with the above problem is to develop new coupled neutronics codes capable of directly solving Equations 2.19-2.20. From a programming point of view, however, it is much easier to use existing neutronics and augmenting codes capable of solving the separate adjoint problems and to take care of the coupling via source terms in an iterative way. Thus only the proper calculation of the adjoint sources is needed and the lengthy and complicated development of new codes can be avoided.

At first glance it seems tempting to introduce the following block iterative scheme for solving Equations 2.19-2.20 (k is the iteration index):

$$
\left(\hat{M}^0\right)^{*} w^{(k+1)}_\phi = S_\phi - \left(\left.\frac{\partial \hat{N}}{\partial \phi}\right|_0\right)^{*} w^{(k)}_T
$$
$$
\left(\left.\frac{\partial \hat{N}}{\partial T}\right|_0\right)^{*} w^{(k+1)}_T = S_T - \left(\hat{J}^0\right)^{*} w^{(k+1)}_\phi.
$$

One problem with this approach is that the adjoint transport operator (M̂^0)* is singular. This issue is well known in traditional GPT and limits its use to a certain class of responses (Williams, 1986). From elementary linear algebra it follows that the inhomogeneous adjoint transport equation can only be solved if the source is orthogonal to the forward solution φ^0.



As a consequence the block iteration described above could only be convergent if

$$
\left\langle S_\phi - \left(\left.\frac{\partial \hat{N}}{\partial \phi}\right|_0\right)^{*} w^{(k)}_T,\; \phi^0 \right\rangle_\phi = 0
$$

held in every step of the iteration, which would most probably not be the case. To overcome the obstacle, Equations 2.19-2.20 can be recast in the following form:

$$
\left(\hat{M}^0_\zeta\right)^{*} w_\phi + \left(\left.\frac{\partial \hat{N}}{\partial \phi}\right|_0\right)^{*} w_T = S_\phi + \lambda^0 \zeta \left(\hat{F}^0\right)^{*} w_\phi
$$
$$
\left(\hat{J}^0\right)^{*} w_\phi + \left(\left.\frac{\partial \hat{N}}{\partial T}\right|_0\right)^{*} w_T = S_T.
$$

Here M̂^0_ζ = L̂^0 − λ^0(1 − ζ)F̂^0 was introduced, which is no longer singular for ζ ∈ (0, 1], since λ^0 is the fundamental eigenvalue and λ^0 < λ^1 < λ^2 < ... holds for all other eigenvalues of Equation 2.1. With the above form the natural block iteration scheme to follow is

$$
\left(\hat{M}^0_\zeta\right)^{*} w^{(k+\frac{1}{2})}_\phi = S_\phi - \left(\left.\frac{\partial \hat{N}}{\partial \phi}\right|_0\right)^{*} w^{(k)}_T + \lambda^0 \zeta \left(\hat{F}^0\right)^{*} w^{(k)}_\phi \qquad (2.21)
$$
$$
\left(\left.\frac{\partial \hat{N}}{\partial T}\right|_0\right)^{*} w^{(k+\frac{1}{2})}_T = S_T - \left(\hat{J}^0\right)^{*} w^{(k+\frac{1}{2})}_\phi \qquad (2.22)
$$
$$
w^{(k+1)}_\phi = w^{(k)}_\phi + r_\phi \left(w^{(k+\frac{1}{2})}_\phi - w^{(k)}_\phi\right)
$$
$$
w^{(k+1)}_T = w^{(k)}_T + r_T \left(w^{(k+\frac{1}{2})}_T - w^{(k)}_T\right).
$$

Here r_φ and r_T are chosen relaxation constants. Such schemes are convergent with strong enough under-relaxation, regardless of the value of ζ, but are rather unstable and not very robust (see Section 2.5.3).
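The behaviour of such a relaxed block iteration can be mimicked on a small dense stand-in problem; the 2x2 blocks, sources and relaxation constants below are purely illustrative assumptions, with each np.linalg.solve call playing the role of one "separate code" adjoint solve:

```python
import numpy as np

# Toy block linear system  A w_phi + B w_T = S_phi,  C w_phi + D w_T = S_T,
# solved by alternating block solves with under-relaxation. All matrices
# here are hypothetical stand-ins, not actual adjoint operators.
A = np.array([[4.0, 1.0], [1.0, 3.0]])     # nonsingular "neutronics" block
B = np.array([[0.5, 0.0], [0.0, 0.5]])     # feedback from the augmenting adjoint
C = np.array([[0.3, 0.0], [0.0, 0.3]])     # feedback from the neutronic adjoint
D = np.array([[2.0, 0.0], [0.0, 2.0]])     # "augmenting" block
S_phi, S_T = np.array([1.0, 2.0]), np.array([0.5, 1.5])

r_phi = r_T = 0.8                          # under-relaxation constants
w_phi, w_T = np.zeros(2), np.zeros(2)
for k in range(200):
    w_phi_half = np.linalg.solve(A, S_phi - B @ w_T)     # "neutronics" solve
    w_T_half = np.linalg.solve(D, S_T - C @ w_phi_half)  # "augmenting" solve
    w_phi = w_phi + r_phi * (w_phi_half - w_phi)         # relaxation update
    w_T = w_T + r_T * (w_T_half - w_T)

# At convergence the iterate satisfies the full coupled block system.
residual = np.concatenate([A @ w_phi + B @ w_T - S_phi,
                           C @ w_phi + D @ w_T - S_T])
```

With weaker diagonal dominance of the blocks the same loop diverges unless the relaxation constants are reduced, reflecting the robustness issues noted above.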

As in most practical situations the coupled adjoint operator is represented by a sparse matrix, Krylov algorithms (Saad, 2003) are a natural and promising choice for solving the adjoint system. For a generic linear system described as Ax = b, Krylov methods search for the approximate solution x_m in the Krylov subspace K_m(A, r_0) = span{r_0, Ar_0, A^2 r_0, ..., A^{m−1} r_0}, where r_0 = b − Ax_0 is the initial residual vector for some initial guess x_0. This means that the implementation of a Krylov solver around Equations 2.19-2.20 only requires the ability of multiplying with the coupled adjoint matrix, which takes the
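A matrix-free Krylov solve can be sketched as follows; the operator here is a hypothetical symmetric positive definite stand-in for the coupled adjoint matrix, so plain conjugate gradients can serve as the Krylov method (the nonsymmetric coupled adjoint operator would instead call for GMRES or a similar method):

```python
import numpy as np

# Matrix-free Krylov sketch: the solver touches the system matrix only through
# a matrix-vector product routine. The operator below is an illustrative
# diagonally dominant symmetric stand-in that is never assembled as a matrix.
n = 50
diag = 4.0 + 0.1 * np.arange(n)

def matvec(v):
    """Apply the (never assembled) operator: diagonal plus nearest-neighbour coupling."""
    out = diag * v
    out[1:] += 0.5 * v[:-1]
    out[:-1] += 0.5 * v[1:]
    return out

def cg(apply_A, b, tol=1e-12, maxiter=500):
    """Conjugate gradient built only on matrix-vector products."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = cg(matvec, np.ones(n))
residual = np.linalg.norm(matvec(x) - np.ones(n))
```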
