Goal Adaptive Discretization of a One-Dimensional Boltzmann Equation

Goal Adaptive Discretization of a One-Dimensional Boltzmann Equation

Dissertation

for the degree of doctor at the Technische Universiteit Delft,

by the authority of the Rector Magnificus, prof.ir. K.C.A.M. Luyben, chairman of the Board for Doctorates,

to be defended in public on Monday 14 November 2011 at 10:00

by

Wijnand HOITINGA, engineer in Aerospace Engineering


Prof. dr. ir. E.H. van Brummelen

Composition of the doctoral committee:

Rector Magnificus, chairman
Prof. dr. ir. R. de Borst, Technische Universiteit Delft, promotor
Prof. dr. ir. E.H. van Brummelen, Technische Universiteit Eindhoven, promotor
Prof. dr. M. Torrilhon, RWTH Aachen
Prof. dr. ir. C. Vuik, Technische Universiteit Delft
Prof. dr. R.P. Stevenson, Universiteit van Amsterdam
Prof. dr. ir. B. Koren, Universiteit Leiden/CWI
Dr. ir. M.I. Gerritsma, Technische Universiteit Delft

Acknowledgement:

This research is performed within MicroNed, part of the BSIK research program of the Dutch government.

Copyright © 2011 by Wijnand Hoitinga

Printed in The Netherlands by Ipskamp Drukkers
ISBN 978-94-6190-106-4


It is a great pleasure to present my thesis. The past years have been an instructive experience in which I have learned many new aspects regarding (mathematical) modeling, numerical methods and simulation. But these aspects are only the fundamentals of my research. The things one learns while working towards a Ph.D. degree are numerous. I have experienced the last years as a large playground in which I had time to explore and study many fields of interest. Of course, there is never sufficient time to study everything and, although pursuing a career in academia would offer me more time, I have chosen to return to industry. This time, however, with a different mission, namely, to reduce the gap between industry and academia. In my opinion, many advanced approximation methods which improve the ability to simulate reality are not applied in industry, because the distance between universities and software companies, and subsequently to the engineering companies which use the software tools, is too large. This is not primarily because the newly developed approximation methods are too complicated, but rather because the software companies spend most of their resources on creating a user-friendly environment. Such an environment is of course also an essential aspect, but it does not contribute to the quality of the approximation (simulation).

To introduce the newest approximation methods, I aim to work independently, which gives me the opportunity to work for industry and, moreover, the possibility to reserve time for research whenever I think this is important. Having contact with research groups, e.g., the outstanding research on numerical methods in the groups Multiscale Engineering Fluid Dynamics and Numerical Methods in Engineering of prof. Van Brummelen and prof. De Borst, respectively, enables me to introduce the latest developments in industry.


To demonstrate its potential, we considered a one-dimensional prototype of the Boltzmann equation. I wish you a pleasant time reading my thesis and I hope you will enjoy it.

Wijnand Hoitinga


Contents

1 Introduction
1.1 Flow modeling
1.2 Goal adaptive finite-element method
1.3 Scope and research objective
1.4 Outline

2 Fluid flow modeling – from microscale to macroscale
2.1 Knudsen number
2.1.1 Collision frequency and mean free path
2.1.2 Classification of flow regimes
2.2 The microscale description – Molecular Dynamics
2.2.1 Evolution of molecular state
2.2.2 Molecular potentials
2.2.3 Observables
2.2.4 Non-deterministic molecular model
2.3 The mesoscale description – The Boltzmann equation
2.3.1 The Boltzmann equation
2.3.2 Properties of the Boltzmann equation
2.3.3 BGK and ES-BGK model
2.4 The macroscale description – The hydrodynamic equations
2.4.1 One-particle probability-density function and macroscopic quantities
2.4.2 Conservation of mass, momentum and energy
2.4.3 Constitutive relations by a perturbation expansion method

3.1.1 Collision term for binary collisions in 1D
3.1.2 Binary collision plane in 1D
3.2 Molecular model
3.2.1 Impulsive differential equations
3.2.2 Boundary conditions
3.2.3 Non-deterministic setting and equilibrium distributions
3.2.4 Random-collision process
3.3 New collision term for the Boltzmann equation in 1D
3.3.1 Integro-differential evolution equation
3.3.2 Random-collision operator
3.4 Properties of the 1D prototype Boltzmann equation
3.4.1 Collision invariants
3.4.2 H-theorem
3.4.3 Conservation of mass and energy
3.5 Auxiliary conditions
3.5.1 Domain specification
3.5.2 Boundary conditions
3.6 Conclusions

4 Numerical approximation of the one-dimensional Boltzmann equation
4.1 Discontinuous Galerkin finite-element method
4.2 Evaluation of the collision term
4.3 Iterative solution procedure
4.4 Numerical experiments and results
4.4.1 Convergence of the Picard-iteration process
4.4.2 Convergence of the DG finite-element approximation
4.4.3 Convergence to equilibrium
4.4.4 Comparison of the Boltzmann equation and molecular dynamics
4.4.5 Heat transfer
4.5 Conclusions

5 Goal-adaptive discretization of the steady 1D Boltzmann equation
5.1 A-posteriori error estimation
5.1.1 Abstract problem
5.1.2 Galerkin approximation
5.1.3 Discretization errors in goal quantities
5.1.4 Adjoint-based error estimates
5.1.5 Computable error estimates
5.1.6 Non-linear case
5.2 Goal-adaptive mesh-refinement
5.3 The adjoint Boltzmann model
5.3.1 Preliminaries
5.3.2 Linearized adjoint
5.3.3 Dual loss term
5.3.4 Dual gain term
5.3.5 Adjoint model
5.4 Numerical aspects
5.4.1 Galerkin approximation
5.4.2 Picard iteration
5.4.3 Domain truncation
5.4.4 Quadrature for the collision operator
5.5 Numerical experiments
5.5.1 Convergence of the DG finite-element approximation
5.5.2 Uniform versus adaptive refinement for non-smooth dual solutions
5.5.3 Incompatible boundary conditions
5.6 Conclusions

6 Conclusions and recommendations

A Numerical aspects of the molecular dynamics simulation
A.1 Random initial conditions for the molecular system
A.2 Random-boundary conditions

B Scaling of the residual reduction

C Degree-of-freedom analysis

Summary
Samenvatting
Acknowledgements


1 Introduction

1.1 Flow modeling

Fluid-flow problems in the transitional molecular/continuum regime play an important role in many engineering applications, e.g., in aerospace engineering, in microfluidic devices and in many systems and processes in the high-tech industry. Furthermore, such problems are gaining prominence with the perpetual trend towards miniaturization in science and engineering. In many applications, interest is restricted to a particular macroscale quantity of interest, i.e., a quantity that is independent of the characteristics of individual molecules, but rather depends on certain ensemble properties. Examples of such macroscale quantities are thrust, heat transfer at fluid/solid interfaces and flow rate.

The numerical simulation of flows in the transitional molecular/continuum regime and the determination of macroscale quantities from such simulations poses a fundamental challenge. In principle, a variety of flow models are available to serve as a basis for numerical simulation. The appropriateness of these flow models depends on the regime in which a flow resides, where the flow regime is identified by a scaling parameter known as the Knudsen number (Kn). The Knudsen number expresses the ratio of the mean free path between molecules and a typical length scale of observation. In the high Knudsen number regime (say Kn > 1), the length scale of observation essentially pertains to the behavior of individual molecules. Accordingly, molecular-dynamics models, e.g., in the form of a Hamiltonian system, provide a meaningful description. In the low Knudsen number regime (say Kn < 0.1), the length scale of observation encapsulates large numbers of molecules, and the flow can be appropriately described by a continuum model, such as the Navier-Stokes and Euler equations. In the transitional regime, however, neither of the aforementioned models provides an adequate basis for numerical simulations. Continuum models are invalid in this regime or, at least, such models exhibit severe modeling errors on account of the meaningless closing relations that are essential to ensure well-posedness of the continuum equations. On the other hand, the number of molecules in the system is so large that it precludes a molecular-dynamics simulation. The largest molecular-dynamics simulation of today contains as few as O(10^9) molecules, which is less than the number of gas molecules under atmospheric conditions in a cubic container with a width of 10 µm [73]. Moreover, in the context of macroscale quantities of interest, it can be argued that a molecular-dynamics simulation is unnecessarily complex, in that it yields redundant information; the simulation provides information on the precise state of each individual molecule, while the quantity of interest is independent of properties of individual molecules.

The transitional molecular/continuum regime is covered by the Boltzmann equation. The Boltzmann equation specifies the evolution of a one-molecule probability-density distribution in position-momentum space [21, 43]. The Boltzmann equation in fact encapsulates all conventional continuum models, in the sense that with appropriate scalings of the macroscopic length and time scales, limit solutions of the Boltzmann equation correspond to solutions of these continuum models [5, 33, 41, 61, 62]. Numerical approximation of the Boltzmann equation poses fundamental complications, however, on account of the high-dimensional setting of the equation: for a problem with d spatial dimensions, the corresponding position-momentum domain of the Boltzmann equation is 2d-dimensional. For conventional discretization methods for (integro-)differential equations, such as finite-difference or finite-element methods with uniform meshes, the number of unknowns in the discrete approximation increases by a factor of 2^(2d) whenever the computational mesh is refined by a factor of 2. Notwithstanding successful applications to simplified problems (see, e.g., [1, 35]), this computational complexity is prohibitive for practical applications. Current simulation methods for 2- and 3-dimensional problems are generally based on Bird's Direct-Simulation-Monte-Carlo (DSMC) method [10, 11] or its Nanbu extension [2, 66]. The computational complexity of DSMC methods depends sensitively on the Knudsen number, however, and the computational cost of the methods becomes prohibitive in the continuum limit.

A cogent alternative approximation method for the Boltzmann equation is provided by finite-element approximations of so-called moment-closure hierarchies of the Boltzmann equation [42, 59, 60, 82]. In a moment-closure hierarchy, the Boltzmann equation in 2d dimensions is represented by a hierarchy of hyperbolic systems in d dimensions. The number of equations in each hyperbolic system depends on its rank in the hierarchy. The hyperbolic systems can in turn be approximated by a finite-element method; see, for instance, [6, 58]. An important property of moment-closure hierarchies is that the (Maxwell-Boltzmann) equilibrium distributions that provide the connection between the molecular and continuum models of flows (cf., e.g., [21]) are included in the coarsest model in the hierarchy. Accordingly, finite-element approximations of moment-closure hierarchies yield a very efficient approximation of the continuum limit.

In the context of restricted interest to macroscale quantities, it is anticipated that substantial computational savings can be achieved by means of a goal-oriented adaptive-refinement strategy [7, 40, 71]. In goal-adaptive methods, the computational model is refined in such a manner that an optimal approximation of the quantity of interest is obtained. Based on the solution of a dual problem, the contribution of local errors in the solution to the quantity of interest is identified, and only the regions that yield a significant contribution to the error in the goal quantity are refined. One can therefore envisage a computational strategy based on finite-element approximations of moment-closure hierarchies, in which the hierarchical rank in the finite-element model is not globally fixed, but is instead locally adapted in accordance with the goal-error indicator.

1.2 Goal adaptive finite-element method

Accurate approximate solutions by the finite-element method require a high-resolution finite-element mesh, i.e., many elements and/or a high approximation degree. The unfortunate scaling of the number of unknowns with the dimensionality of a problem makes it practically impossible to approximate the Boltzmann equation with reasonable accuracy. To illustrate, a discrete approximation corresponding to a coarse finite-element mesh of only 32 unknowns per spatial dimension would already exceed 10^9 unknowns. Fortunately, one usually does not have an interest in the general solution to a problem, but rather an interest in certain goal quantities, i.e., functionals of the solution. Accurate approximation of functionals does not necessarily require uniform resolution. That is, accurate goal-quantities do not necessarily require accurate approximate solutions, at least, not in a global sense. In general, the error contribution to goal-quantities is confined to certain regions in the domain. To reduce the error in goal-quantities effectively, the discretization error must be minimized in the regions that show a substantial error contribution to the goal-quantity. This can be done by employing adjoint-based mesh-refinement strategies. The Boltzmann equation seems particularly suitable for goal-adaptive finite-element methods, because essentially all quantities of interest are moments and, hence, functionals of solutions of the Boltzmann model.
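As a quick arithmetic check of this count (an illustration added here, not taken from the thesis), the position-momentum domain for d spatial dimensions is 2d-dimensional, so a tensor-product mesh with n unknowns per dimension carries n^(2d) unknowns in total:

    # Unknown count for a uniform tensor-product discretization of the
    # 2d-dimensional position-momentum domain of the Boltzmann equation.
    def boltzmann_unknowns(n_per_dim: int, d: int) -> int:
        return n_per_dim ** (2 * d)

    for d in (1, 2, 3):
        print(d, boltzmann_unknowns(32, d))
    # d = 3 gives 32**6 = 1_073_741_824, i.e. already more than 10^9 unknowns,
    # and each uniform refinement multiplies this count by 2**(2*d) = 64.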

In their pioneering work, Babuška and Rheinboldt based their mesh-refinement strategies on the smoothness of the solution [3, 4]. Non-smooth solutions are more difficult to approximate and, hence, yield larger residuals. Mesh refinement in regions with large residuals improves the approximate solution and, consequently, reduces errors in goal-quantities. However, as mentioned before, large residuals do not necessarily mean a poor estimate of the goal-quantity. Therefore, residual-based methods do not generally lead to an optimal discretization. Later, in the 1990s, goal-oriented adaptive strategies were developed. Goal-adaptive strategies give an estimate of the error in a goal-quantity based on a duality pairing between the residual and a solution of the corresponding dual problem. For more details on goal-adaptive strategies we refer to [7, 71, 84] and the references therein.

Goal-oriented adaptive finite-element methods can be roughly described by the following sequence of steps. First, an approximate solution on a coarse finite-element mesh is computed. This solution is referred to as the primal solution. Next, the (dual) solution of a corresponding adjoint or dual problem is computed. The adjoint problem is driven by the functional of interest. Hence, the adjoint solution contains information on the goal-quantity. In fact, the dual solution can be seen as a weight function indicating the contribution of local errors in the primal solution, where the accuracy of the primal solution is determined by the residual. Therefore, the dual-based adaptive finite-element method is also referred to as the dual-weighted-residual method [7]. This is corroborated by the definition of the error estimate of the goal functional, which is given by a duality pairing of the residual of the primal problem and the dual solution. For instance, if the residual is large but the weight (dual solution) is zero, then the error contribution to the goal-quantity is zero. Hence, a region with large residuals and a zero dual has no effect on the error in the quantity of interest. To obtain local data from the error estimate, one has to decompose the error estimate into element error contributions. Local error contributions yield refinement indicators that mark elements that need to be refined (or derefined). Elements that exhibit large error contributions to the goal-quantity are marked for refinement. After all marked elements are refined, the sequence of steps is repeated until the error in the goal-quantity is sufficiently small.
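This solve-estimate-mark-refine cycle can be summarized in a few lines of pseudocode (a generic sketch added here; the callables solve_primal, solve_dual, error_indicators and refine are hypothetical placeholders, not routines from the thesis):

    def goal_adaptive_solve(mesh, solve_primal, solve_dual, error_indicators,
                            refine, tol, max_iter=20):
        """Generic goal-oriented adaptive loop (illustrative sketch)."""
        u_h = None
        for _ in range(max_iter):
            u_h = solve_primal(mesh)                # primal solution on the current mesh
            z_h = solve_dual(mesh)                  # adjoint solution, driven by the goal functional
            eta = error_indicators(mesh, u_h, z_h)  # {element: contribution from the duality pairing}
            if sum(abs(e) for e in eta.values()) < tol:
                break                               # goal error sufficiently small
            threshold = 0.5 * max(abs(e) for e in eta.values())
            marked = [K for K, e in eta.items() if abs(e) >= threshold]
            mesh = refine(mesh, marked)             # refine only the marked elements
        return u_h, mesh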

1.3 Scope and research objective

The complexity of the equations that model flows in the transitional regime requires accurate and efficient numerical methods, e.g., goal-adaptive finite-element methods. Presently, there are essentially no numerical methods based on variational principles that adequately solve transitional flow models, i.e., kinetic equations such as the Boltzmann equation. The aim of this research is to

develop and analyse goal-oriented adaptive finite-element methods for the Boltzmann equation.

Straightforward application of goal-adaptive finite-element methods to the Boltzmann equation is rather ambitious for two reasons. Firstly, goal-adaptive strategies are relatively new numerical methods (from the end of the 1990s) and, hence, form an active field of research by themselves. Secondly, the Boltzmann equation has practically never been adequately discretized using variational methods, such as the finite-element method. Therefore, instead of considering the conventional Boltzmann equation, we shall restrict our research in this work to a prototype Boltzmann equation in one spatial dimension. This prototype equation is subsequently used in developing a goal-adaptive finite-element discretization.

To place the work in this thesis into context, we remark that one-dimensional Boltzmann-type equations have been investigated previously. Notable examples are the Kac model [18, 53] and the Ianiro-Lebowitz model [20, 52]. For both the Kac and the Ianiro-Lebowitz model, there exists an underlying molecular model in which molecules move along a string and undergo collisions. The molecular interactions in the Kac model are based on a random walk (the Kac walk) and not on a mechanistic collision process. In the Ianiro-Lebowitz model, the randomness in the collision process pertains to the random selection between two distinct collision processes, viz., a perfectly-elastic collision and a velocity-sign-reversing collision. In this study, we focus on a single collision process which is inherently random on account of the continuous dependence of the post-collision velocities on a random angle. This gives a close similarity, in the interpretation of the collision mechanism, to the Boltzmann equation in two and three spatial dimensions. Moreover, the form of the collision operator in the new one-dimensional Boltzmann equation exhibits close similarities to the collision term of the conventional multi-dimensional Boltzmann equation. In particular, the collision term in the Boltzmann model assumes the form of an integral operator and it displays a quadratic nonlinearity in the distribution function. Based on the collision process, the 1D Boltzmann equation gives the evolution of a probability-density function associated with the molecular state of the molecules in the 1D molecular model.

The Boltzmann-type equation is approximated by means of a discontinuous Galerkin finite-element method. In the context of the DG finite-element method, it is to be remarked that the numerical investigation of Boltzmann-type equations by means of DG finite-element methods has received only sparse attention. We allude to the recent work of Gamba and co-workers on the Boltzmann-Poisson equation [27], on the Vlasov-Poisson system [46] and on the Wigner-Fokker-Planck equation [37]. Regarding goal-adaptive discretization of the Boltzmann equation, we stress that, to our knowledge, no previous attempts have been presented in the literature. This work serves as an exploration of goal-adaptive finite-element strategies for the Boltzmann equation, by employing goal-adaptive strategies on the one-dimensional prototype equation.

1.4 Outline

This dissertation is organized as follows. First, in Chapter 2, an overview of the different types of flow models is presented. Each model is associated with a specific Knudsen regime. Starting from the smallest scale, i.e., large Knudsen numbers, a sequence of models is described, each associated with a different scale regime, viz., the microscale, mesoscale and macroscale. Moreover, a relation is given between the models on each scale. Subsequently, in Chapter 3, we focus on the mesoscale, and a low-dimensional prototype Boltzmann equation is presented, i.e., we derive a Boltzmann-type equation in one spatial dimension. This Boltzmann-type equation possesses most of the important properties of the original Boltzmann equation. The Boltzmann-type equation is subsequently used to develop efficient numerical methods. Such efficient methods can give valuable insight into how to solve the original (high-dimensional) Boltzmann equation. In addition, Chapter 3 presents a molecular model in one spatial dimension that mimics collision dynamics in two and three spatial dimensions. The molecular model can be used to validate the newly derived prototype Boltzmann equation. In Chapter 4, we proceed with approximating the Boltzmann-type equation using a discontinuous Galerkin finite-element method. A number of test cases are presented that, on the one hand, validate that the 1D Boltzmann equation possesses most of the important properties of the Boltzmann equation and, on the other hand, compare the Boltzmann solution to solutions of the molecular model. Furthermore, heat-transfer results are presented for both the molecular and the Boltzmann model. Chapters 3 and 4 are based on a published article entitled: A discontinuous Galerkin finite-element method for a 1D prototype of the Boltzmann equation [49]. Chapter 5 is concerned with the development of a goal-adaptive finite-element method for the 1D Boltzmann model. To this end, the adjoint model associated with the Boltzmann-type equation is derived. The adjoint problem is subsequently used to find a-posteriori error estimators for predefined goal functionals, i.e., quantities of interest. Chapter 5 concludes with numerical experiments that minimize the error in the quantities of interest by employing mesh-refinement strategies. It is shown that goal-adaptive finite-element methods minimize the error in predefined goal-quantities much more efficiently than conventional finite-element methods. Chapter 5 is based on published work in the proceedings of the 2009 ADMOS conference and the proceedings of the 27th International Symposium on Rarefied Gas Dynamics [48, 50]. Finally, Chapter 6 formulates the conclusions of this research.


2 Fluid flow modeling – from microscale to macroscale

Any substance resides in a certain phase, of which the most common are the solid phase, the liquid phase and the gaseous phase. A fluid is a substance that resides in either the liquid phase or the gaseous phase, i.e., the term fluid refers to all liquids and gases. Fluids are composed of atoms, or groups of atoms known as molecules, that, unlike the atoms in solids, can move freely through space.

An observer examining a fluid on a molecular level (or microscale) will note a highly chaotic system of rapidly moving molecules. For instance, according to the equipartition theorem (see footnote 1), the average velocity ξ of a molecule is given by ξ = √(3kT/m), where k = 1.3806504 · 10^-23 J/K is the Boltzmann constant, T the temperature in Kelvin and m the molecular mass in kg. For a fluid composed of Argon atoms at a temperature of 273 K (≈ 0 °C), this gives average velocities of over 400 m/s (see footnote 2)! The particle density in the Argon fluid is approximately 1.18324 · 10^24 atoms/m^3. This astronomically large particle density leads to very small mean free paths, i.e., the average distance between two colliding (or interacting) Argon atoms, of approximately 63 · 10^-9 m [47]. Consequently, the collision frequency is in the order of a billion collisions per particle per second. Or, more inconceivably, every second there are about a nonillion (10^30) collisions within a m^3.
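As a quick illustration (a check added here, not part of the thesis), the quoted figures follow directly from these formulas:

    import math

    k = 1.3806504e-23                  # Boltzmann constant [J/K]
    T = 273.0                          # temperature [K]
    m = 39.948e-3 / 6.0221415e23       # molecular mass of Argon [kg], cf. footnote 2

    xi = math.sqrt(3.0 * k * T / m)    # average molecular speed from equipartition
    lam = 63e-9                        # quoted mean free path [m]
    nu = xi / lam                      # collisions per particle per second

    print(round(xi), f"{nu:.1e}")      # ~413 m/s and ~6.6e9 collisions/s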

Fortunately, an observer examining a fluid at the level of everyday life (or macroscale) does not witness any of the molecular chaos. Moreover, the interest we have at the macroscale does not depend on the microscale, in the sense that macroscale interests are ensemble properties of the molecular system. For instance, typical flow properties of interest are pressure, temperature and bulk velocity, and not molecular velocity, molecular temperature and molecular pressure, where the meaning of the latter two is somewhat unclear on a molecular scale.

Footnote 1: First proposed in 1843 by John James Waterston.

Footnote 2: The molar mass of Argon is 39.948 gram. One mole contains 6.0221415 · 10^23 particles and, hence, the molecular mass is approximately 6.63 · 10^-26 kg.

To determine the macroscopic flow properties in a simulation environment, we require a mathematical description or model of the flow. At the macroscale these models, which we refer to as hydrodynamic models, are the most efficient, because they express macroscopic properties of interest in terms of other macroscopic properties. For example, the ideal gas law for one mole of gas is given by p = ρkT/m, where ρ denotes the density of the fluid in kg/m^3. More importantly, at the macroscale there exist relations between flux properties and macroscopic flow properties, e.g., relations that express the internal stresses and heat flux in terms of density, velocity and temperature. These relations, better known as constitutive relations, are of utmost importance in macroscale flow modeling, because they ensure that the hydrodynamic equations form a closed set. However, the constitutive relations rely on the underlying molecular system. As a consequence, if the molecular system rarefies, i.e., the particle density decreases, the constitutive relations become meaningless and, therefore, so do solutions of the hydrodynamic equations. Consequently, the hydrodynamic equations no longer describe the flow properties well and other models need to be considered.

To model a flow we require different flow models for different molecular densities. To determine which model is associated with a molecular-density regime, we require a definition of scale, i.e., a parameter that indicates at what scale a flow resides. The two extremes of the scale level are, on one extreme, the molecular system that models a flow as a set of discrete molecules and, on the other extreme, the continuum system in which the detailed molecular system can be completely disregarded. This chapter defines the scale parameter and then describes the models, starting from the microscale and ending at the macroscale, giving the relations between the different types of flow models.

2.1 Knudsen number

2.1.1 Collision frequency and mean free path

Collisions between molecules form the basis of all flow properties. Essentially, molecules are the transport mechanism in a fluid and collisions influence that mechanism. Therefore, how and how fast flow properties evolve depends on the number of molecular collisions that occur within a time span. If collisions are accountable for the flow properties, we conclude that the molecular (or number) density is an important quantity in the evolution of flow properties. If the molecular density for a given temperature is high, we have many collisions per unit of time, and when the molecular density is low we have relatively few collisions per unit of time.

The molecular density is closely related to the collision frequency. Of course, high densities lead to many collisions, at variance with low densities. Because the number of collisions determines the evolution of flow properties, we define a relative collision frequency, i.e., the collision frequency divided by a reference collision frequency, as a parameter that defines the scale at which the flow resides.


Denoting by ν and ω the collision frequency and the reference collision frequency, respectively, one can define the dimensionless scale parameter Kn = ω/ν, known as the Knudsen number.

Another definition of the Knudsen number is related to the mean free path between molecules. The mean free path is the average distance that molecules travel in between collisions (how to define a collision is somewhat vague, because molecules generally move in a potential field and, therefore, collide either all the time or never; see Section 2.2.2 for more details on molecular potentials). The mean free path is inversely proportional to the collision frequency, since smaller mean free paths lead to a higher collision frequency. In a similar fashion, the reference collision frequency is related to the size of the (reference) system, characterized by a reference length scale or length scale of observation. Therefore, we non-dimensionalize the mean free path by a reference length. Denoting by λ and L the mean free path and the length scale of observation, respectively, the Knudsen number can alternatively be defined by Kn = λ/L.

2.1.2 Classification of flow regimes

The definition of the Knudsen number reveals an interesting observation. A low-density flow around a large object, e.g., the flow around a space shuttle at high altitudes, can be modeled with the same model as the flow inside a very small system, say, a microelectromechanical system (MEMS). Low-density flows have a large mean free path or a small collision frequency. On the other hand, the flow inside a MEMS is characterized by a small length scale of observation or a small reference collision frequency. Despite the disparity of the systems, their Knudsen numbers can be the same.

Another classical example is that of a re-entry vehicle. Starting from space, the mean free path is large compared to the length scale of observation, typically some characteristic dimension of the space vehicle. As the vehicle returns to the Earth's surface, the air density increases and, hence, the mean free path becomes smaller. Of course, the length scale of observation is unchanged. Consequently, during the descent, the Knudsen number ranges from large to very small, making this a multiscale problem.

One generally distinguishes three flow regimes associated with the Knudsen number [20, 22, 43, 82], as illustrated by the sketch after this list:

• Kn ≲ 0.01: Macroscale regime: continuum flows, hydrodynamic equations (Euler/Navier-Stokes-Fourier);

• 0.01 ≲ Kn ≲ 10: Mesoscale regime: transition regime, Boltzmann equation (BE);

• Kn ≳ 10: Microscale regime: free molecular flow, molecular dynamics (MD).
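A toy classifier following these approximate bounds (an illustration added here, not part of the thesis):

    def knudsen_number(mean_free_path: float, length_scale: float) -> float:
        """Kn = lambda / L."""
        return mean_free_path / length_scale

    def flow_regime(kn: float) -> str:
        """Map a Knudsen number onto the approximate regime boundaries above."""
        if kn < 0.01:
            return "macroscale: continuum, Euler/Navier-Stokes-Fourier"
        if kn < 10.0:
            return "mesoscale: transition regime, Boltzmann equation"
        return "microscale: free molecular flow, molecular dynamics"

    # Example: Argon with a mean free path of 63 nm in a 1 micrometre channel.
    print(flow_regime(knudsen_number(63e-9, 1e-6)))   # Kn = 0.063 -> transition regime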

For small Knudsen numbers (the macroscale) a flow is considered to be a continuum and is modeled with the hydrodynamic equations and appropriate constitutive relations, i.e., a set of partial differential equations that govern the important flow properties density, velocity and temperature; cf. Figure 2.1. We emphasize two important benefits of modeling at the macroscale. The first benefit is that the hydrodynamic equations give relations between macroscale flow properties without referring to the underlying molecular microstructure. In many applications, the macroscale flow properties contain exactly the information that one is interested in. The second benefit is related to solving the hydrodynamic equations. Because the hydrodynamic equations are partial differential equations, an established mathematical framework is available to solve these equations. Think of, e.g., the existence of well-developed approximation methods, well-posedness proofs, a-posteriori and a-priori error-estimation techniques, goal-oriented strategies, convergence properties, knowledge on residuals, etc. However, as we have already mentioned, if the Knudsen number becomes too large, the constitutive relations become meaningless and the hydrodynamic equations fail to give an accurate description of a fluid.

Figure 2.1: Distinction between flow regimes: on the microscale (Kn ≳ 10) the flow is adequately modeled as a collection of discrete molecules (molecular dynamics); on the mesoscale (10^-2 ≲ Kn ≲ 10) the flow is best described by the Boltzmann equation; on the macroscale (Kn ≲ 10^-2) a flow is considered a continuum (Navier-Stokes). The sketch spans the scales nano (10^-9 m, 1 molecule), micro (10^-6 m, 10^9 molecules), meso (10^-3 m, 10^18 molecules) and macro (1 m, 10^27 molecules), with Kn decreasing from ∞ to 0.

In the transition regime between continuum and discrete molecules, or mesoscale, we distinguish two sub-regimes, namely 0.01 ≲ Kn ≲ 0.1 and 0.1 ≲ Kn ≲ 10. Because a continuum description is most efficient, we prefer to extend the validity of the continuum models as much as possible. For Knudsen numbers between 0.01 and 0.1, it is common practice to reduce modeling errors by providing special types of boundary conditions, viz., imposing so-called velocity-slip and temperature-jump boundary conditions. These types of boundary conditions lead to solutions that match experimentally observed flow properties.

If the Knudsen number exceeds 0.1, one needs to revert to the Boltzmann equation. The Boltzmann equation is an integro-differential equation that is posed in a high-dimensional (state) space. Mainly because of the high-dimensional setting, the Boltzmann equation is complicated to solve and, therefore, many studies are concerned with developing methods that aim to find solutions of the Boltzmann equation; see, e.g., [19, 20, 22, 23, 34, 59, 69, 82].

The free molecular flow regime, or microscale, is far from continuum. At the microscale, a flow is modeled as a system of discrete molecules that behave in accordance with Hamiltonian dynamics. The molecular model provides a complete specification of the state of each individual molecule. Therefore, the microscale description contains the most detail, compared to the meso- and macroscale models. Molecular models yield enormous amounts of information on a flow. If one is interested in macroscale properties, it can be argued that a molecular model provides severely redundant information. On the other hand, it is generally prohibitively complicated to derive consistent meso- and macroscale models from a molecular model.

In the next sections we consider the micro-, meso- and macroscale flow models in further detail, and we derive the relations between the different models.

2.2 The microscale description – Molecular Dynamics

In the free molecular flow regime a flow is modeled as a set of discrete molecules that behave in accordance with Hamiltonian dynamics; see, e.g., [45]. This method is better known as molecular dynamics. From here on we will consider molecules that consist of one atom (monatomic molecules). The relation between the velocity and position of the molecules on the one hand and macroscale flow properties (or ensemble properties) on the other is not unambiguous, because how one determines the average is somewhat arbitrary. For instance, the average velocity of a molecule depends on the time interval in which the velocity is measured. Longer time intervals lead to better approximations of the average. What time interval gives an adequate average and, more importantly, what is the error compared to an infinite time interval?

Although the molecular dynamics method is not an ideal method to determine macroscale flow properties, it provides insight into the dynamics of many-particle systems. Moreover, the molecular dynamics method is a starting point in understanding the relation between the different models on the different Knudsen scales.

2.2.1 Evolution of molecular state

Equations of motion

To present a formal setting of the molecular dynamics method, we consider n molecules that are distributed in a space X ⊆ R^d, where d denotes the spatial dimension. Each molecule is identified by a unique index i ∈ N := [1, . . . , n]. The position and momentum of molecule i are denoted by x_i ∈ X and p_i ∈ R^d, respectively. The position-momentum pair γ_i := (x_i, p_i) represents the state of molecule i, and the state of the complete system is denoted by γ = (γ_1, . . . , γ_n). Furthermore, we note that, by definition, the momentum is given by p_i = m_i ξ_i, where m_i and ξ_i denote the mass and velocity of molecule i, respectively. We will restrict our considerations to mono-species systems and, hence, m_i = m. The state of a molecule is an evolving quantity and, therefore, γ_i = γ_i(t), t ∈ R_+. Consequently, the system γ = γ(t) evolves as well.

Figure 2.2 depicts two arbitrary molecules with their position and velocity. To present the equations of motion that govern the evolution of γ, we consider the Hamiltonian defined by the total energy of the system, i.e., the sum of the potential and kinetic energy of the n molecules:

    H := E_kin + E_pot = Σ_{i∈N} ( p_i^2/(2m) + V(x_i) ),    (2.1)

where V is the potential-energy function and p^2 is equivalent to the squared magnitude of the momentum, |p|^2. The Hamiltonian equations of motion are derived from a Lagrangian. We will not go into detail on the derivation, but instead refer to [72, §3.2]. The Hamiltonian equations of motion are given by:

    ẋ_i(t) = ∂H/∂p_i,     (2.2a)
    ṗ_i(t) = −∂H/∂x_i,    (2.2b)

where the dot denotes the time derivative d(·)/dt.

The definition of the Hamiltonian (2.1) shows that the kinetic-energy term is a function of momentum (or velocity) and the potential-energy term is a function of position. Therefore, we can write Hamilton's equations of motion in a more familiar form:

    ẋ_i = ξ_i    and    m dξ_i/dt = −∂V/∂x_i = m a,    (2.3)

where we observe that the time derivative of position is velocity and the time derivative of velocity is acceleration, denoted by a = ẍ. The second identity in (2.3) is known as Newton's second law, where the force is given by the negative position derivative of the potential. Generally, molecules carry a potential field V, which induces a force on all other molecules in the system.
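A minimal time integrator for (2.3) is sketched below (a generic velocity-Verlet step, added for illustration; the force routine is a placeholder and this is not necessarily the scheme used in the thesis):

    import numpy as np

    def velocity_verlet_step(x, xi, force, m, dt):
        """One velocity-Verlet step for dx/dt = xi, m dxi/dt = F(x).

        x, xi : (n, d) arrays of positions and velocities
        force : callable returning the (n, d) array of resulting forces F_i(x)
        """
        a = force(x) / m
        x_new = x + dt * xi + 0.5 * dt**2 * a      # position update
        a_new = force(x_new) / m
        xi_new = xi + 0.5 * dt * (a + a_new)       # velocity update
        return x_new, xi_new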

Initial conditions

To complete the equations of motion, an initial state γ^0 = (γ_1(0), . . . , γ_n(0)) must be specified for all n molecules, i.e., we need to specify the position x and velocity ξ at t = 0. These conditions are referred to as initial conditions.

Figure 2.2: Representation of two molecules in motion: two molecules i and j with positions x_i, x_j and velocities ξ_i and ξ_j, respectively. The force term m dξ/dt acting on both molecules due to a potential field is identical in magnitude, but with opposite sign.

One may argue that the deterministic setting of the molecular-dynamics description requires a precise specification of initial states, because once the initial states are given, the evolution of the state and, hence, the evolution of any system property is predetermined. In addition to this argument, a molecular system is chaotic and, therefore, small perturbations of the initial state lead to large deviations in the evolution of the system. Consequently, to simulate a molecular system we would need initial states with infinite accuracy. Of course, this is impossible and, therefore, in practice, the initial conditions are generated randomly.
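In practice this random generation typically amounts to placing the molecules in the domain and drawing velocities from an equilibrium (Maxwell-Boltzmann) distribution; a minimal sketch under that assumption (added for illustration; Appendix A.1 of the thesis treats the actual procedure):

    import numpy as np

    def random_initial_state(n, d, box_length, T, m, k=1.3806504e-23, rng=None):
        """Uniform positions in a box and Maxwell-Boltzmann velocities at temperature T."""
        rng = np.random.default_rng() if rng is None else rng
        x = rng.uniform(0.0, box_length, size=(n, d))           # positions
        xi = rng.normal(0.0, np.sqrt(k * T / m), size=(n, d))   # each component ~ N(0, kT/m)
        xi -= xi.mean(axis=0)       # remove the net drift so the bulk velocity is zero
        return x, xi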

Boundary conditions

If the position domain is a proper subset of R^d, there exists a boundary ∂X of X that encloses the physical space. Whenever the motion of a molecule is directed towards the boundary and the position of the molecule coincides with the boundary, i.e., x ∈ ∂X and ξ · κ < 0, where κ denotes the unit inward normal of ∂X, we must specify conditions to prevent the molecule from leaving X. These conditions are referred to as boundary conditions. Two examples of well-known boundary conditions are specular reflection and periodic boundary conditions.

For a more detailed description of the molecular dynamics method and numerical aspects on how to solve the equations of motion, we refer to the books of Haile and Rapaport [45, 72], in particular, [72, §2.2.2] and [45, §2.9].

2.2.2 Molecular potentials

From Hamilton's equations of motion we observe that molecular motion is dictated by Newton's second law: the resulting force on a molecule equals mass times acceleration. The resulting force is induced by a potential field that is formed by a contribution of all individual molecules in the system. Therefore, the evolution of the molecular state is essentially determined by the characteristics of the potential. In the following we discuss several common interaction potentials.


The Lennard-Jones potential

Molecules induce a potential field, which typically contains an attractive part and a repulsive part. For two molecules i and j that are separated by a distance r_ij = |x_i − x_j|, the general expression for the induced potential is given by:

    V(r_ij) = c_1 [ (c_2/r_ij)^α − (c_2/r_ij)^β ],    (2.4)

where c_1 and c_2 are constants related to molecule-specific characteristics, and α > 0 and β > 0 are coefficients associated with the repulsive and attractive parts of the potential field, respectively. Typically, the coefficients in (2.4) are given by α = 12 and β = 6, which yields the Lennard-Jones potential. The attractive part in the Lennard-Jones potential is known as the Van der Waals interaction. The repulsive part is associated with quantum mechanics, and is known as the Pauli repulsion. The constant c_1 governs the strength of the interaction and c_2 is a length scale associated with the transition point between attraction and repulsion. In molecular dynamics, the potential appears as a spatial derivative, i.e., ∂V/∂x. The spatial derivative of a potential yields a (negative) force. Therefore, we write the force corresponding to a potential as

    F_ij := −(∂V(r_ij)/∂r_ij) (dr_ij/dx_i) = (c_1/r_ij) [ α (c_2/r_ij)^α − β (c_2/r_ij)^β ] (x_i − x_j)/r_ij.    (2.5)

The expression in (2.5) gives the force on molecule i due to the presence of molecule j. Of course, in a system containing n molecules, the total resulting force on molecule i is given by the sum of n − 1 individual force contributions:

    F_i = Σ_{j=1, j≠i}^{n} F_ij.    (2.6)
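A direct implementation of (2.4)-(2.6) for the Lennard-Jones case α = 12, β = 6 (an O(n^2) illustration added here; the names c1 and c2 follow the constants in the text, and the routine can be plugged into a time integrator such as the velocity-Verlet sketch above):

    import numpy as np

    def lj_forces(x, c1, c2):
        """Total Lennard-Jones force (2.5)-(2.6) on each molecule; x is (n, d)."""
        n = x.shape[0]
        F = np.zeros_like(x)
        for i in range(n):
            for j in range(i + 1, n):
                r_vec = x[i] - x[j]
                r = np.linalg.norm(r_vec)
                s = c2 / r
                # -dV/dr = (c1/r) * (12*s**12 - 6*s**6); direction (x_i - x_j)/r
                f_ij = (c1 / r) * (12.0 * s**12 - 6.0 * s**6) * r_vec / r
                F[i] += f_ij          # force on i due to j
                F[j] -= f_ij          # Newton's third law
        return F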

If in (2.4) we set β = 0, and let α → ∞ and c_2 → 0, we obtain a potential that displays a strong repulsive character if r_ij < c_2. Molecules associated with such potentials are referred to as hard spheres, and can be conceived of as billiard balls in space. Another example of a frequently used potential is given by the inverse-power potentials,

    V_ι(r_ij) = V_0 r_ij^(1−ι),    (2.7)

where V_0 and ι are constants. Inverse-power potentials contain repulsive forces only. More details on power potentials and other types of molecular potentials can be found in, e.g., [19, 45, 72, 82].

Cut-off radius

From a computational point of view, long-range potentials, such as (2.4) or (2.7), form a complication, because computing the resulting force on all molecules is an order-n^2 operation. Hence, even for relatively small systems of molecules, computing the resulting force becomes prohibitive. To reduce the computational cost, it is common practice to introduce a so-called cut-off radius r_c, such that the potential vanishes for r_ij > r_c. A cut-off radius is justified, because the potential decays rapidly for increasing r_ij.

Figure 2.3: The Lennard-Jones potential and its derivative.

The Lennard-Jones potential with a cut-off radius is defined by

    V_c(r_ij) = V(r_ij) − V(r_c),    r_ij ≤ r_c = δ c_2,    (2.8)

where δ > 1 is such that the potential and force are sufficiently small at r_c. In practice, one usually assumes δ = 5/2; see [45, pg. 191]. For r = 2^(1/6) c_2 the resulting force is zero and, hence, this is the transition point between the repulsive and attractive force. Figure 2.3 plots the Lennard-Jones potential, the shifted potential (V_c) and the resulting force. If the potential has a confined range of influence, the evaluation of the resulting force on a molecule does not depend on the positions of all n − 1 other molecules, but only on those molecules for which r_ij ≤ r_c.
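A sketch of the shifted, truncated potential (2.8), reusing the c1, c2 parametrization of (2.4) (added for illustration only):

    def lj_potential(r, c1, c2):
        """Lennard-Jones potential (2.4) with alpha = 12, beta = 6."""
        s = c2 / r
        return c1 * (s**12 - s**6)

    def lj_potential_cutoff(r, c1, c2, delta=2.5):
        """Shifted, truncated potential V_c of (2.8); zero beyond r_c = delta*c2."""
        rc = delta * c2
        if r > rc:
            return 0.0
        return lj_potential(r, c1, c2) - lj_potential(rc, c1, c2)

    # The corresponding force vanishes at r = 2**(1/6) * c2, the transition point
    # between repulsion and attraction mentioned above.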

2.2.3 Observables

Given the state of a molecular system at a certain moment,

    γ(t) = (γ_1(t), . . . , γ_n(t)) ∈ (X × Ξ)^n,

an observable of the molecular system can be expressed as Ψ(γ(t), t), with Ψ : (X × Ξ)^n × (0, T) → R a suitable function. Generally, one is interested in observables that are not explicitly time dependent, so that Ψ : (X × Ξ)^n → R. We define the macroscale quantities as the restricted class of observables of the form:

    Ψ : (γ_1(t), . . . , γ_n(t)) ↦ (1/n) Σ_{i=1}^{n} ψ(γ_i(t)).    (2.9)

One may note that (2.9) is independent of the molecular identity, in the sense that Ψ(P(γ)) = Ψ(γ) for all permutations P(γ) of the vector of states γ. For instance, the macroscale quantity ψ = ξ represents the mean (or bulk) velocity v(t) of the molecules:

    v(t) = (1/n) Σ_{i=1}^{n} ξ_i(t).    (2.10)

Another example of a macroscale quantity is the temperature T, which is obtained if ψ is defined by

    ψ(γ) = (n m / (d k)) ξ^2,    (2.11)

where k = 1.3806 · 10^-23 J/K is the Boltzmann constant. Substitution of (2.11) into (2.9) shows that the temperature is proportional to the sum of the kinetic energies of the molecules.

The macroscale quantities associated with ψ in (2.10) and (2.11) are global macroscale quantities. That is, the bulk velocity and temperature are based on all molecules in the system. Of course, from a continuum perspective, macroscale quantities are defined at points in position space, i.e., temperature and velocity are functions of x ∈ X. From (2.9) we observe that on the microscale we cannot evaluate ensemble properties at points, simply because molecules do not share positions and, hence, at a point there is no such thing as an ensemble. Therefore, to introduce resolution we must always consider regions ω ⊂ X. Macroscale quantities with a spatial resolution are then defined by

    ψ_ω = χ_ω ψ(γ),    (2.12)

where

    χ_ω = 1 if γ ∈ ω, and 0 otherwise,

is the indicator function. For instance, the temperature within a region ω ⊂ X is given by

    T(ω) = (1/n) Σ_{i=1}^{n} χ_ω (n m / (d k)) ξ_i^2.
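The definitions (2.9)-(2.12) translate directly into code; the sketch below (added for illustration, with the normalization taken literally from the formulas above) evaluates a region-restricted macroscale quantity from the arrays of molecular states:

    import numpy as np

    def bulk_velocity(xi):
        """Mean (bulk) velocity v(t) of (2.10); xi is an (n, d) velocity array."""
        return xi.mean(axis=0)

    def macroscale_quantity(x, xi, psi, region):
        """Region-restricted macroscale quantity in the spirit of (2.9) and (2.12).

        psi(x_i, xi_i) is the molecular quantity; region(x_i) plays the role of
        the indicator function chi_omega.
        """
        n = x.shape[0]
        return sum(psi(x[i], xi[i]) for i in range(n) if region(x[i])) / n

    # Usage sketch: the fraction of molecules in the left half of a unit box,
    # i.e. psi = 1 and chi_omega selecting x_0 < 0.5.
    # frac = macroscale_quantity(x, xi, lambda xp, xv: 1.0, lambda xp: xp[0] < 0.5)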

2.2.4 Non-deterministic molecular model

The evolution of a molecular system and its properties, both on a molecular (microscale) level and on a (macroscale) system level, is determined by the initial system state. Generally, exact initial molecular states are unknown, and the molecules are assigned random initial states according to a probability distribution. As the molecules in the molecular system evolve, so does the probability distribution. If we consider a number of initial system states, for which all system states are associated with the same probability distribution, each initial state gives a different evolution of the system. However, the evolution of the associated probability distribution will be the same. Therefore, we focus on the initial probability distribution and consider the molecular system provided with non-deterministic initial molecular states.


Probabilistic initial states

To introduce the probabilistic setting of random initial-state generation, we denote by Ω := X × Ξ the state space of an individual molecule. The system state γ containing the n initial states of the ensemble resides in the phase space denoted by Ω^n. Formally, a non-deterministic specification of the initial conditions consists of a probability-density function (pdf) ϱ_0 : Ω^n → [0, ∞] that assigns to each subset ω ⊂ Ω^n the probability that the initial state γ := (γ_1, . . . , γ_n) resides in ω:

    P(γ ∈ ω) = ∫_ω ϱ_0(γ) dγ.    (2.13)

The probability distribution of a molecular system evolves under the Liouville equation; see, e.g., [30, 70]. This evolution can be described by a duality according to:

    ∫_{Ω^n} Ψ(γ) ϱ(t, γ) dγ = ∫_{Ω^n} Ψ(S_t γ) ϱ_0(γ) dγ,    (2.14)

for all sufficiently smooth observables (or test functions) Ψ : Ω^n → R, where S_t γ is the state of the system at time t corresponding to the initial condition γ. Equation (2.14) expresses that ϱ is constant along trajectories of the molecular system.
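Read as a recipe, the right-hand side of (2.14) says that expectations under the evolved density can be estimated by sampling initial states from ϱ_0 and pushing them through the flow map S_t. A Monte Carlo illustration for a single molecule in 1D free flight, where S_t(x, ξ) = (x + ξt, ξ) and ϱ_0 is assumed Gaussian (added here for illustration only):

    import numpy as np

    rng = np.random.default_rng(0)
    sigma_x, sigma_xi, t = 1.0, 0.5, 2.0
    x0 = rng.normal(0.0, sigma_x, 100_000)      # initial positions ~ rho_0
    xi0 = rng.normal(0.0, sigma_xi, 100_000)    # initial velocities ~ rho_0

    psi = lambda x, xi: x**2                    # observable Psi(gamma)
    estimate = np.mean(psi(x0 + xi0 * t, xi0))  # int Psi(S_t gamma) rho_0(gamma) dgamma
    exact = sigma_x**2 + (t * sigma_xi)**2      # expectation of x**2 under the evolved density
    print(estimate, exact)                      # both close to 2.0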

The Liouville equation

To derive the Liouville equation, we start from the probability-density function ϱ(γ, t) : Ω^n → [0, ∞]. The phase space Ω^n contains all possible states γ and ϱ is associated with the probability of finding one arbitrary state. The Liouville equation follows from the stipulation that probability is conserved along trajectories in phase space. Therefore, we can write for the probability-density function

    (d/dt) ϱ( γ_1(x(t), ξ(t)), . . . , γ_n(x(t), ξ(t)), t ) = 0.    (2.15)

and every molecular state by γi= (xi, ξi), where x ∈ Rd and ξ ∈ Rd. Therefore,

the differential form of dρ/ dt is given by: d dt% γ1 x(t), ξ(t), . . . , γn x(t), ξ(t), t = ∂% ∂t + n X i=1 ∂% ∂γi · d dtγi x(t), ξ(t)  =∂% ∂t + n X i=1 ∂% ∂xi dxi dt + n X i=1 ∂% ∂ξi dξi dt =∂% ∂t + n X i=1 ξi· ∂% ∂xi + n X i=1 Fi m · ∂% ξi = 0, (2.16)

(32)

where the last step in (2.16) follows from a substitution of Hamilton’s equations of motion (2.2). We note that every molecule has equal mass m. The final expression in (2.16) is known as Liouville’s equation.

Reduced equations and the BBGKY hierarchy

Liouville's equation is set on a very high, (2dn + 1)-dimensional space. The equation serves only as a theoretical construct. We will not attempt to solve (2.16), but only derive a so-called reduced Liouville equation that contains less, yet sufficient, detail to describe the evolution of macroscale flow properties.

To facilitate the derivation of the reduced Liouville equation, we define a marginal distribution function f^(r) by

    f^(r)(γ_1, . . . , γ_r) := ∫_{Ω^(n−r)} ϱ(γ, t) dγ_{r+1} . . . dγ_n.    (2.17)

The reduced Liouville equation then follows from integration of Liouville's equation (2.16) over a reduced (or subspace of) phase space:

    ∫_{Ω^(n−r)} (dϱ/dt) dγ_{r+1} . . . dγ_n = 0.    (2.18)

Expanding the differential in (2.18) into its three main members ∂_t ϱ, ξ · ∂_x ϱ and m^-1 F · ∂_ξ ϱ of (2.16), where the latter two members consist of a summation and ∂_(·) denotes the partial derivative with respect to (·), we obtain as the contribution of the first member:

    ∫_{Ω^(n−r)} (∂ϱ/∂t) dγ_{r+1} . . . dγ_n = (∂/∂t) ∫_{Ω^(n−r)} ϱ dγ_{r+1} . . . dγ_n = (∂/∂t) f^(r),    (2.19)

where the first identity in (2.19) follows from interchanging the order of integration and differentiation. In a similar manner, the second term, ξ · ∂_x ϱ, in (2.16) yields

    ∫_{Ω^(n−r)} Σ_{i=1}^{n} ξ_i · (∂ϱ/∂x_i) dγ_{r+1} . . . dγ_n
      = Σ_{i=1}^{r} ξ_i · (∂/∂x_i) ∫_{Ω^(n−r)} ϱ dγ_{r+1} . . . dγ_n + ∫_{Ω^(n−r)} Σ_{i=r+1}^{n} ξ_i · (∂ϱ/∂x_i) dγ_{r+1} . . . dγ_n
      = Σ_{i=1}^{r} ξ_i · (∂/∂x_i) f^(r) − ∫_{Ξ^(n−r)} ∮_{∂X^(n−r)} Σ_{i=r+1}^{n} ϱ ξ_i · κ_x ds_{r+1} . . . ds_n dξ_{r+1} . . . dξ_n
      = Σ_{i=1}^{r} ξ_i · (∂/∂x_i) f^(r),    (2.20)

where the first and second identities in (2.20) follow from a change in the order of integration and differentiation and from applying integration by parts, respectively. Moreover,


in (2.20) we introduced the notation ∂X and κ_x indicating the boundary of X and the outward unit normal on ∂X, respectively. The third identity in (2.20) shows that the boundary integral vanishes, which is justified as follows. Because no molecules can leave the physical space, the probability of finding molecular (or system) states outside Ω^(n−r) is zero, i.e., ϱ(γ) = 0 for γ ∉ Ω^(n−r). Alternatively, we assume that ξ · κ_x ≤ 0 for x ∈ ∂X and, hence, no molecular states exist outside the physical domain. Essentially, this means that no molecules can move through the boundaries of the domain.

Finally, the third term, m^-1 F · ∂_ξ ϱ, in (2.16) gives:

    ∫_{Ω^(n−r)} Σ_{i=1}^{n} (F_i/m) · (∂ϱ/∂ξ_i) dγ_{r+1} . . . dγ_n
      = Σ_{i=1}^{r} (∂/∂ξ_i) · ∫_{Ω^(n−r)} (F_i/m) ϱ dγ_{r+1} . . . dγ_n + ∫_{Ω^(n−r)} Σ_{i=r+1}^{n} (∂/∂ξ_i) · (F_i ϱ/m) dγ_{r+1} . . . dγ_n
      = Σ_{i=1}^{r} (∂/∂ξ_i) · ∫_{Ω^(n−r)} F_i (ϱ/m) dγ_{r+1} . . . dγ_n − ∫_{X^(n−r)} ∮_{∂Ξ^(n−r)} Σ_{i=r+1}^{n} (ϱ/m) F_i · κ_ξ ds_{r+1} . . . ds_n dx_{r+1} . . . dx_n
      = Σ_{i=1}^{r} (∂/∂ξ_i) · ∫_{Ω^(n−r)} F_i (ϱ/m) dγ_{r+1} . . . dγ_n,    (2.21a)

where, in a similar manner as in (2.20), we denote the boundary of the velocity space by ∂Ξ and the outward unit normal to ∂Ξ by κ_ξ. In (2.21a) we repeated similar steps as

in (2.20). The integral over ∂Ξ vanishes, because the probability in velocity space decays for increasing molecular velocities, i.e., in the limit |ξ| → ∞ it holds that ϱ → 0. Therefore, we set ϱ = 0 on ∂Ξ and, hence, the surface integral vanishes.

The force, F_i, in (2.21a) depends on all molecular positions. Hence, we cannot isolate it from the integral and subsequently write a reduced probability density f^(r). However, if we express the force term as a sum according to

    F_i = Σ_{j=1, j≠i}^{n} F_ij(γ_i, γ_j) = Σ_{j=1, j≠i}^{r} F_ij + Σ_{j=r+1}^{n} F_ij    (1 ≤ i ≤ r),


and substitute into (2.21a), we can write a sequence of identities:

    Σ_{i=1}^{r} (∂/∂ξ_i) · ∫_{Ω^(n−r)} Σ_{j=1, j≠i}^{r} F_ij (ϱ/m) dγ_{r+1} . . . dγ_n + Σ_{i=1}^{r} (∂/∂ξ_i) · ∫_{Ω^(n−r)} Σ_{j=r+1}^{n} F_ij (ϱ/m) dγ_{r+1} . . . dγ_n
      = Σ_{i=1}^{r} (∂/∂ξ_i) · Σ_{j=1, j≠i}^{r} (F_ij/m) f^(r) + (n − r) Σ_{i=1}^{r} (∂/∂ξ_i) · ∫_{Ω} F_{i,r+1} ∫_{Ω^(n−r−1)} (ϱ/m) Π_{k=r+2}^{n} dγ_k dγ_{r+1}
      = Σ_{i=1}^{r} (∂/∂ξ_i) · Σ_{j=1, j≠i}^{r} (F_ij/m) f^(r) + ((n − r)/m) Σ_{i=1}^{r} (∂/∂ξ_i) · ∫_{Ω} F_{i,r+1} f^(r+1) dγ_{r+1},    (2.21b)

where in the last step we used the fact that γ_k is essentially arbitrary, which allows us to change variables n−r−1 times to obtain n−r identical terms. To summarize, we collect the results of (2.19)-(2.21) and write the reduced Liouville equation:

    ∂f^(r)/∂t + Σ_{i=1}^{r} ξ_i · (∂/∂x_i) f^(r) + Σ_{i=1}^{r} (∂/∂ξ_i) · Σ_{j=1, j≠i}^{r} (F_ij/m) f^(r) = ((r − n)/m) Σ_{i=1}^{r} (∂/∂ξ_i) · ∫_{Ω} F_{i,r+1} f^(r+1) dγ_{r+1}.    (2.22)

Equation (2.22) is a hierarchy in terms of the r-particle probability-density function and constitutes the so-called BBGKY hierarchy, after Bogoliubov [13], Born, Green [14], Kirkwood [54, 56] and Yvon. To solve (2.22) is as hard as to find solutions of the Liouville equation, because the r-particle probability-density function depends on the (r+1)-particle probability-density function, which depends on the (r+2)-particle probability-density function, and so on and so forth. Several attempts have been made to find solutions of (2.22); see, e.g., [70, Ch. 4].

We conclude this section by presenting the reduced Liouville equation in its simplest form and set r = 1 in (2.22) to obtain

    f′ + ξ · ∂f/∂x = −((n − 1)/m) (∂/∂ξ) · ∫_{Ω} F(γ, γ_*) f(γ, γ_*) dγ_*,    (2.23)

where (·)′ = ∂(·)/∂t and we omitted the indices that denote the order of the

probability-density function. Furthermore, we omitted the molecule index indicating molecule 1 and replaced the index indicating molecule 2 by a ∗. If we assume statistical independence, i.e., f(γ, γ_*) = f(γ) f(γ_*), Equation (2.23) becomes the Vlasov equation, which represents a model describing the short-time behavior of a system of weakly interacting mass points. This is the case in, e.g., rarefied gases whose molecules interact with relatively weak, long-range forces, such as the electrons of an ionized gas (Coulomb force) or the stars of a stellar system (gravitational force); see [19, pp. 57-59] for additional assumptions that apply to the Vlasov equation.


Truncated probability-density function

An alternative way to reduce the Liouville equation is presented by Harold Grad in [42, 43], by integrating over a truncated domain Ω_ε^(n−r) = (X_ε × Ξ)^(n−r), with X_ε = X \ B_ε and B_ε = {B_1 × . . . × B_r : B_i = |x_i − x_j| ≤ ε, j ∈ N, j ≠ i} denoting a collection of spheres with radius ε. Essentially, B_ε denotes a subdomain that contains all molecular positions for which the first r molecules and all other molecules are within a distance ε of each other. If ε denotes the diameter of the molecules and we assume that the probability density ϱ vanishes when two molecules physically overlap, there is essentially no difference compared to the analysis of the previous section. Hence, the truncated reduced probability-density function is defined by

    f_ε^(r)(γ_1, . . . , γ_r) = ∫_{Ω_ε^(n−r)} ϱ(γ_1, . . . , γ_n) dγ_{r+1} . . . dγ_n,    (2.24)

which is identical to (2.17) except for the fact that the integration domain excludes B_ε. In what follows we will restrict ourselves to the case r = 1.

Similarly to the previous section, integrating the Liouville equation (2.16) over the truncated domain gives three terms. However, because the integration limits now depend on x_i, it is not permitted to simply interchange integration and differentiation, which complicates the matter at hand. Nevertheless, we integrate the Liouville equation over the truncated domain Ω_σ and obtain:

\int_{Ω_σ^{n−1}} (dϱ/dt) dγ_2 … dγ_n = ∂f_σ^{(1)}/∂t + \sum_{i=1}^{n} \int_{Ω_σ^{n−1}} ∂/∂x_i · ξ_i ϱ dγ_2 … dγ_n + \sum_{i=1}^{n} \int_{Ω_σ^{n−1}} ∂/∂ξ_i · (F_i/m) ϱ dγ_2 … dγ_n
    = ∂f_σ^{(1)}/∂t + \sum_{i=1}^{n} \int_{Ω_σ^{n−1}} ∂/∂x_i · ξ_i ϱ dγ_2 … dγ_n + (1/m) ∂/∂ξ_1 · \int_{Ω_σ^{n−1}} F_1 ϱ dγ_2 … dγ_n = 0.   (2.25)

The first and third term in (2.25) yield results similar to those in (2.19) and (2.21a), respectively. The steps that led to the second term in (2.20) cannot be repeated here, because of the aforementioned x_i-dependence of the integration limits. However, the second term in (2.25) can be written as

\sum_{i=1}^{n} \int_{Ω_σ^{n−1}} ∂/∂x_i · ξ_i ϱ dγ_2 … dγ_n = \int_{Ω_σ^{n−1}} ∂/∂x_1 · ξ_1 ϱ dγ_2 … dγ_n + \sum_{i=2}^{n} \int_{Ω_σ^{n−1}} ∂/∂x_i · ξ_i ϱ dγ_2 … dγ_n
    = ∂/∂x_1 · ξ_1 f_σ^{(1)} + \sum_{i=2}^{n} \int_{Ξ^{n−1}} \oint_{B^{n−1}} ξ_1 · κ_x ϱ ds_2 … ds_n dξ_2 … dξ_n − \sum_{i=2}^{n} \int_{Ξ^{n−1}} \oint_{B^{n−1}} ξ_i · κ_x ϱ ds_2 … ds_n dξ_2 … dξ_n
    = ξ_1 · ∂f_σ^{(1)}/∂x_1 + (n−1) \int_{Ξ} \oint_{B_2} f_σ^{(2)} (ξ_1 − ξ_2) · κ_x ds_2 dξ_2,   (2.26)

where in (2.26) the third member of the second identity follows from Gauss' theorem and the first two members from the fact that

∂/∂x_i · \int_{|x_i−x_j|>σ} h(x_i, x_j) dx_j = \int_{|x_i−x_j|>σ} ∂/∂x_i · h(x_i, x_j) dx_j − \oint_{|x_i−x_j|=σ} h(x_i, x_j) · κ ds.
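As a plausibility check (added here for illustration), the one-dimensional counterpart of this identity, for a scalar integrand h, follows directly from the Leibniz rule:

d/dx \int_{|x−y|>σ} h(x, y) dy = d/dx ( \int_{−∞}^{x−σ} h dy + \int_{x+σ}^{∞} h dy ) = \int_{|x−y|>σ} ∂h/∂x dy + h(x, x−σ) − h(x, x+σ),

where the last two terms are the one-dimensional analogue of the boundary integral over |x − y| = σ, with the unit normal κ on the excluded interval taken to point away from x.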

To summarize, we present the reduced Liouville equation for a truncated probability-density function, which for r = 1 gives:

f' + ξ · ∂f/∂x = −(n−1) \int_{Ξ} \oint_{B} f^{(2)}(γ, γ_*) (ξ − ξ_*) · κ_x ds_* dξ_* − ((n−1)/m) ∂/∂ξ · \int_{Ω} F(γ, γ_*) f^{(2)}(γ, γ_*) dγ_*,   (2.27)

where in (2.27) we omitted the index 1 and the σ subscripts referring to molecule 1 and to the truncated probability-density function, respectively. Furthermore, we replaced the index 2 referring to the second molecule by a ∗. Equation (2.27) is similar to Equation (2.23), except that the right-hand side now contains an additional term involving a boundary integral. This term is associated with hard-sphere collisions and describes how molecules interact on contact, i.e., if |x − x_*| = σ.

Equation (2.27) contains information on molecule-molecule interaction. Unfortunately, (2.27) is still a hierarchy, because the one-particle probability-density function depends on the two-particle probability-density function. Finding solutions to (2.27) is as hard as solving the Liouville equation. However, under certain assumptions, Equation (2.27) turns out to be a valuable equation that describes molecular dynamics in terms of the one-particle probability-density function alone. This equation is known as the Boltzmann equation and, once solutions of the Boltzmann equation are available, all macroscale properties of the molecular system are known without resolving the complicated molecular-dynamics system. Section 2.3 will focus on (2.27) and give the relation between solutions of the Boltzmann equation and macroscale flow properties.


2.3 The mesoscale description – The Boltzmann equation

The microscale flow model that we presented in Section 2.2 is a description of the flow that contains practically all details of the flow characteristics. Because we are generally interested in only a restricted set of flow quantities and the initial state of the molecular system is in fact a random quantity, we argued that a more efficient description of the flow would be in terms of probability-density functions. This led to the equation of Liouville and, subsequently, to reduced equations or BBGKY hierarchies.

Essentially, Liouville’s equation and the BBGKY hierarchies contain even more information than the molecular-dynamics model. Nevertheless, by introducing certain assumptions into the probabilistic models, it is possible to obtain a flow description that contains significantly fewer details of the molecular flow, yet still sufficient information for our flow quantities of interest. The probabilistic description including these assumptions will be considered a mesoscale flow description and is the topic of this section.

2.3.1 The Boltzmann equation

Equation (2.27) is still connected with the BBGKY hierarchy in the sense that it contains both the one-particle marginal and the two-particle marginal. Therefore, we do not have an equation that gives the evolution of f^{(1)} without a relation to higher-order probability-density functions. To derive the Boltzmann equation from (2.27) we must introduce closure relations.

Model assumptions for the Boltzmann equation

Equation (2.27) is closely related to the Boltzmann equation. The Boltzmann equation, however, is based on four model assumptions. These assumptions are:

A1 a truncated (or finite) potential field: molecules interact over a finite distance;

A2 restriction to binary collisions: never will more than two molecules collide (or interact) at the same time;

A3 slow variations of f^{(1)}: it takes many collisions to modify the probability-density function;

A4 molecular chaos: molecular states are uncorrelated.

The first assumption (A1) is familiar from the molecular description and is justified because of the rapidly decaying influence of the potential field surrounding a molecule. If F(x, x_*) = 0 for |x − x_*| > σ, the second integral term on the right-hand side of Equation (2.27) vanishes, and we are left with

f' + ξ · ∂f/∂x = −(n−1) \int_{Ξ} \oint_{B} f^{(2)}(γ, γ_*) (ξ − ξ_*) · κ ds_* dξ_*.   (2.28)


Assumption A2, associated with binary collisions, ensures that at the start of a collision, i.e., when two molecules are a distance |x − x_*| = σ apart and approaching each other, no third molecule begins to interact. We denote the state of molecules 1 and 2 at the start of a collision, t = t^−, by (γ^−, γ_*^−). At the end of a binary collision, t = t^+, i.e., when the distance between the two colliding molecules is again |x − x_*| = σ and they are receding, we denote their state by (γ^+, γ_*^+).

The third assumption, A3, dictates that f(γ) is a sufficiently smooth function such that f(γ) does not vary appreciably over a distance σ. Essentially, this implies that it takes sufficiently many collisions to alter the shape of f^{(1)}. As a consequence, we can assume that the pre-collision positions x^− and x_*^− and the post-collision positions x^+ and x_*^+ coincide. Therefore, we replace x^−, x_*^−, x^+ and x_*^+ by x. Moreover, the collision time is small, such that t^− = t^+ = t. Essentially, the slow-variation assumption implies that the collision mechanism is condensed to a point in physical space and an instant in time.

The molecular-chaos assumption, A4, is associated with the correlation between molecules. It is intuitively reasonable that the states of two arbitrary molecules on a collision path are uncorrelated and, hence, statistically independent. Therefore, the two-particle probability-density function associated with two colliding molecules prior to their collision can, by definition, be given by f^{(2)}(γ, γ_*) = f(γ)f(γ_*). Although the molecular states of two molecules that have just collided may very well be correlated, the Boltzmann equation assumes otherwise. This is justified because any correlation between the molecular states rapidly vanishes on account of the chaotic nature of the molecular system. As a result, the two-particle probability-density function is replaced by the product of two one-particle probability-density functions. The molecular-chaos assumption is an important assumption, because it leads to a partial integro-differential equation that gives the evolution of f without any relation to f^{(2)}. For further reading on the assumptions that lead to the Boltzmann equation, see [43, §8] and [42].

Distinction between approaching and receding molecules

From Equation (2.28) it is not obvious that there is a difference between approaching molecules and receding molecules. Therefore, we introduce a change of variables in the boundary integral over the sphere B. Let the relative velocity be denoted by V = ξ − ξ_* and introduce a diametral plane D = D^− ∪ D^+ perpendicular to V, where D^− and D^+ denote the two sides of D; see Figure 2.4. Furthermore, we introduce polar coordinates (r, φ), with 0 ≤ r ≤ σ and 0 ≤ φ ≤ 2π, on D. Then, because of the specific orientation of D with respect to V, the surface integral over B in (2.28) can be expressed as an integral over D according to

\oint_{B} f^{(2)} V · κ ds_B = \int_{D^+} f^{(2)}(γ^+, γ_*^+) V ds_D − \int_{D^−} f^{(2)}(γ^−, γ_*^−) V ds_D
    = \int_0^{σ} \int_0^{2π} ( f^{(2)}(γ^+, γ_*^+) − f^{(2)}(γ^−, γ_*^−) ) V r dφ dr,   (2.29)


where V is the absolute relative velocity (V = |V|), and γ_*^− and γ_*^+ represent the molecular state of the approaching and the receding molecule, respectively. Moreover, the sphere B is divided into two hemispheres, which leads to the two terms in (2.29); see Figure 2.4 for a schematic illustration of the diametral plane.

If we apply all assumptions from the previous section to (2.28), together with the change of variables to polar coordinates in the boundary integral, we obtain the Boltzmann equation:

f' + ξ · ∂f/∂x − ε^{−1} Q(f, f) = 0,   (2.30)

where ε^{−1} = n − 1,

Q(f, f) = \int ( f(ξ^+) f(ξ_*^+) − f(ξ^−) f(ξ_*^−) ) V dω dξ_*   (2.31)

and dω = r dφ dr. Furthermore, in Q we have omitted the position and time dependence. Equation (2.30) is an evolution equation for a one-particle probability-density function. The integral term Q(f, f) is referred to as the collision term, because it describes the effect of collisions on the probability-density function. We note that the collision term only acts in velocity space and not in position space. The multiplication factor ε^{−1} in front of the collision term is associated with the Knudsen number. The second term in (2.30) is referred to as a streaming term, which only acts in position space. Essentially, the probability density is transported through position space with the velocity ξ as transport coefficient.
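For illustration, a minimal numerical sketch (in Python/NumPy, with a hypothetical Gaussian initial density f0 assumed purely for this example) confirms that the streaming part in isolation, f' + ξ·∂f/∂x = 0, is solved exactly by shifting the initial data along the characteristics x − ξt:

import numpy as np

# Minimal sketch: the streaming part of (2.30) in isolation, f' + xi*df/dx = 0,
# is solved exactly along characteristics by f(t, x, xi) = f0(x - xi*t, xi).
# The initial density f0 below is a hypothetical choice (Gaussian in x and xi).
def f0(x, xi):
    return np.exp(-x**2 - xi**2) / np.pi

def f(t, x, xi):
    return f0(x - xi * t, xi)

# central-difference check of the residual f_t + xi*f_x at an arbitrary point
t, x, xi, h = 0.3, 0.2, 0.7, 1.0e-5
f_t = (f(t + h, x, xi) - f(t - h, x, xi)) / (2.0 * h)
f_x = (f(t, x + h, xi) - f(t, x - h, xi)) / (2.0 * h)
print(abs(f_t + xi * f_x))   # close to zero, up to finite-difference error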

In the literature, one frequently finds the collision term separated into two constituents. We will adopt this notation and write for the collision term

Q(f, f) = G(f, f) − L(f, f),   (2.32)

where G(f, f) involves the molecular states of receding molecules and L(f, f) involves those of approaching molecules. The term G accounts for molecular states that are created because molecules have collided. Therefore, this term is referred to as the gain term:

G(f, f) = \int f(x, ξ^+) f(x, ξ_*^+) V dω dξ_*.   (2.33)

Because pre-collision molecular states are destroyed during a collision, the second term in (2.32) is referred to as the loss term:

L(f, f) = \int f(x, ξ^−) f(x, ξ_*^−) V dω dξ_*.   (2.34)

Essentially, the collision term in (2.32) maps a pre-collision molecular state to a post-collision molecular state. Therefore, intuitively, we lose one molecular state, viz. the pre-collision state, and gain another molecular state, viz. the post-collision state. We note that in this case the state can be regarded as the molecular velocity only, because the collision term acts in velocity space only and disregards the molecular position.
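For illustration, note that the integrand of the loss term (2.34) does not depend on ω, so for the hard-sphere case the ω-integration contributes only the constant factor \int dω = πσ²; what remains is an f-weighted mean relative speed. A minimal Monte Carlo sketch (assuming, purely for illustration, a standard Maxwellian for f) estimates this velocity integral and compares it with the known value √(8/π) at ξ = 0:

import numpy as np

# Minimal Monte Carlo sketch: the integrand of the loss term (2.34) does not depend on
# omega, so the omega-integration contributes only the constant factor pi*sigma**2 for
# hard spheres; what remains is I(xi) = int f(xi_*) |xi - xi_*| d(xi_*), the f-weighted
# mean relative speed. Here f is an assumed standard Maxwellian (an illustration only),
# for which the exact value at xi = 0 is sqrt(8/pi) ~ 1.5958.
rng = np.random.default_rng(1)
xi = np.zeros(3)                                  # evaluation point
xi_star = rng.normal(size=(200_000, 3))           # samples of xi_* drawn from f
I_mc = np.linalg.norm(xi_star - xi, axis=1).mean()
print(I_mc, np.sqrt(8.0 / np.pi))                 # Monte Carlo estimate vs. exact value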


Figure 2.4: Illustration of the diametral plane (showing the sphere B, the relative velocity V, the two sides D^± of the plane D, and the polar coordinates r, φ).

Collision vector and collision angle

The Boltzmann equation (2.30) is incomplete until a relation between the arguments in f, i.e., the pre-collision and post-collision velocities, and the integration parameter dω = r dφ dr is specified; see [43, II §15]. So far, the Boltzmann equation (2.30) has been derived under the assumption of identical hard-sphere molecules [22, §1.3]. The connection between the molecular velocities and ω is related to a molecular potential. To facilitate the extension of the collision term to model collisions generated by a potential field, we start by studying the binary collision process in more detail, following the analysis in [82, §3.1.2].

Collisions are momentum and energy conserving. Let us denote by ξ^± and ξ_*^± the pre- (−) and post- (+) collision velocities of two arbitrary colliding molecules. Then the conservation of momentum and energy is expressed by

ξ^− + ξ_*^− = ξ^+ + ξ_*^+   (momentum),   (2.35a)

(ξ^−)^2 + (ξ_*^−)^2 = (ξ^+)^2 + (ξ_*^+)^2   (energy),   (2.35b)

where ξ = |ξ|. An interesting relation appears when we express the post-collision velocities in terms of the pre-collision velocities, i.e., we write ξ^+ = ξ^− + a and, consequently, to satisfy (2.35a), ξ_*^+ = ξ_*^− − a. Substituting these relations into (2.35b) and recalling the relative velocity V^± = ξ^± − ξ_*^±, we find an expression for a in terms of the relative velocity:

a^2 + a · V^− = 0   ⟹   |a| = −(a/|a|) · V^− = −κ · V^−   and   a = −κ (κ · V^−),   (2.36)

where the unit vector κ = a/|a| is referred to as the collision vector; see Figure 2.5 for an illustration of the collision vector. With a from (2.36) we express the post-collision velocities in terms of the pre-collision velocities according to

ξ^+ = ξ^− − κ (κ · V^−)   and   ξ_*^+ = ξ_*^− + κ (κ · V^−).   (2.37)
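The collision rule (2.37) conserves momentum and energy for any unit collision vector κ. A minimal sketch (with arbitrary, randomly sampled pre-collision velocities, added purely for illustration) verifies this numerically:

import numpy as np

# Minimal sketch: the hard-sphere collision rule (2.37) with a random unit collision
# vector kappa; momentum (2.35a) and kinetic energy (2.35b) are conserved exactly.
# The sampled pre-collision velocities are arbitrary and purely illustrative.
rng = np.random.default_rng(0)
xi_pre, xi_star_pre = rng.normal(size=3), rng.normal(size=3)
kappa = rng.normal(size=3)
kappa /= np.linalg.norm(kappa)                    # unit collision vector

V_pre = xi_pre - xi_star_pre                      # pre-collision relative velocity
xi_post = xi_pre - kappa * np.dot(kappa, V_pre)   # post-collision velocities, (2.37)
xi_star_post = xi_star_pre + kappa * np.dot(kappa, V_pre)

print(np.allclose(xi_pre + xi_star_pre, xi_post + xi_star_post))            # momentum
print(np.isclose(np.dot(xi_pre, xi_pre) + np.dot(xi_star_pre, xi_star_pre),
                 np.dot(xi_post, xi_post) + np.dot(xi_star_post, xi_star_post)))  # energy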
