
© TU Delft, The Netherlands, 2006

SOLVING HARMONIC LINEAR PROBLEMS IN UNSTEADY TURBOMACHINERY FLOWS USING A PRECONDITIONED GMRES SOLVER

Magnus L.O. Stridh∗, Lars-Erik Eriksson†

∗Chalmers University of Technology, Division of Fluid Dynamics, Department of Applied Mechanics, 412 96 Göteborg, Sweden
e-mail: magnus.stridh@chalmers.se
web page: http://www.tfd.chalmers.se/~strimag/

†Chalmers University of Technology, Division of Fluid Dynamics, Department of Applied Mechanics, 412 96 Göteborg, Sweden
e-mail: lee@chalmers.se

Key words: Fluid Dynamics, GMRES, Linearized Navier-Stokes, harmonic, preconditioned


1 INTRODUCTION

The linear harmonic approach has been applied to problems involving, for example, blade vibrations or unsteady blade row interaction effects in multistage axial compressors and turbines. The flow is decomposed into a time-averaged flow and an unsteady flow perturbation, which are governed by the Reynolds Averaged Navier-Stokes (RANS) and the Linearized Navier-Stokes (LNS) equations, respectively.

The perturbations are assumed to vary harmonically in time, i.e. to have the form $x'(t) = \hat{x}\, e^{i\omega t}$, which transforms a time-dependent problem into a steady one in the frequency domain. Since the equations are linear, we can solve for one frequency at a time and then add the individual solutions to obtain the total perturbation.

The LNS equations in the frequency domain are traditionally solved by a time-stepping procedure obtained by adding a pseudo time variable. The discretized LNS equations, without or with the added pseudo time, can be written in compact form as:

$A x = b$ (1)

$\dot{x} + A x = b$ (2)

where the vector x, of dimension $N \times 1$, contains all the degrees of freedom (DOF) of the problem, the vector b, of dimension $N \times 1$, is due to the driving terms in the boundary conditions (inhomogeneous terms), and the matrix A, of dimension $N \times N$, represents all of the spatial differencing terms of the linearized flow solver. The dimension N is, for the case of LNS, equal to 5 × (number of cells), since we seek the solution to $\rho'$, $(\rho u)'$, $(\rho v)'$, $(\rho w)'$ and $(\rho e_0)'$.

The time-stepping procedure has its limitations, which are connected to the spectrum of the matrix A. For the time-stepping procedure to converge, the relation $\lim_{t\to\infty} \| e^{-At} \| = 0$ must be fulfilled. Unfortunately this relation is often violated, particularly in those cases where large separations appear in the average flow field, which are the most important cases to compute. Hence the need for a numerically more robust algorithm.
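As a small side illustration (not from the paper), the sketch below checks this convergence criterion for a toy matrix: pseudo time stepping of $\dot{x} + Ax = b$ can only converge when every eigenvalue of A lies in the right half-plane, so that $\| e^{-At} \| \to 0$. The matrix here is a random stand-in, not an LNS Jacobian.

```python
import numpy as np

def time_stepping_can_converge(A: np.ndarray) -> bool:
    """Pseudo-time stepping on x_dot + A x = b converges only if ||exp(-A t)|| -> 0,
    i.e. every eigenvalue of A has a positive real part."""
    return bool(np.all(np.linalg.eigvals(A).real > 0.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    # Illustrative stand-in for A: diagonally dominant, eigenvalues in the right half-plane.
    A_good = 4.0 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
    # Shifting the diagonal pushes eigenvalues into the left half-plane, mimicking the
    # unstable modes that appear when large separations exist in the average flow.
    A_bad = A_good - 5.0 * np.eye(n)
    print("well-behaved case      :", time_stepping_can_converge(A_good))  # expected True
    print("separated-flow-like case:", time_stepping_can_converge(A_bad))  # expected False
```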

As far as the authors are aware, this is one of the first applications of a truly robust preconditioned Generalized Minimal RESidual (GMRES) algorithm to the solution of the LNS equations for turbomachinery applications.

GMRES by itself is a method in which the current residual vector is used as a starting vector to build an orthogonal basis spanning the corresponding Krylov subspace of a given dimension m. The approximate correction to the current solution in this subspace is computed by minimizing the l2 norm of the residual of the system. The new residual vector, obtained after including the correction, then forms the starting vector for the next Krylov subspace and the next correction, and so on.

Since the spectrum of matrix A is extremely rich, a great number of basis vectors is required to obtain the Krylov subspace needed for reaching convergence via the GMRES method alone. For this reason we need to apply some intelligent preconditioning procedure to our linear set of equations (1) to transform the spectrum of matrix A in such a way that GMRES works efficiently.

2 GMRES

The Generalized Minimal RESidual (GMRES) method was first proposed by Saad and Schultz [1] for solving large sparse non-Hermitian linear systems. GMRES is a gradient method in which the residuals form an orthogonal basis spanning the Krylov subspace. The absence of symmetry of the matrix A demands that all previously computed orthogonal vectors be retained.

The method starts by defining a residual vector $r_0$ based on the current approximate solution vector $x_0$ to the linear equation (1):

$r_0 = b - A x_0$ (3)

Normalize the residual and define $\beta$ and the first basis vector $v_1$ ($N \times 1$) as

$v_1 = \dfrac{r_0}{\beta}, \qquad \beta = \| r_0 \|_2$ (4)

Form the set of m orthogonal basis vectors $V_m$ ($N \times m$), definition (5), via the well-known Arnoldi process, using $v_1$ as the starting vector. The sequence for Arnoldi's method is described by algorithm (6).

$V_m = [v_1, v_2, \ldots, v_m]$ (5)

for $j = 1, 2, \ldots, m$: (6)
&nbsp;&nbsp;&nbsp;&nbsp;$h_{i,j} = v_i^{*} A v_j, \quad i = 1, 2, \ldots, j$
&nbsp;&nbsp;&nbsp;&nbsp;$\hat{v}_{j+1} = A v_j - \sum_{i=1}^{j} h_{i,j} v_i$
&nbsp;&nbsp;&nbsp;&nbsp;$h_{j+1,j} = \| \hat{v}_{j+1} \|_2$
&nbsp;&nbsp;&nbsp;&nbsp;$v_{j+1} = \hat{v}_{j+1} / h_{j+1,j}$

The orthogonal projections of A onto the basis vectors, $h_{i,j}$, are stored in an upper Hessenberg matrix $H_m$ of dimension $m \times m$.


Note that in a practical implementation it is common, for numerical reasons, to use a reorthogonalization procedure when constructing the basis vectors; this has been done in the work presented here [2].
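For illustration, a minimal NumPy sketch of the Arnoldi process (6), with the reorthogonalization pass mentioned above, is given below; the test matrix and dimensions are arbitrary stand-ins. It returns $V_{m+1}$ and the $(m+1) \times m$ Hessenberg matrix $\bar{H}_m$ used in the relations derived next.

```python
import numpy as np

def arnoldi(A, v1, m, reorthogonalize=True):
    """Arnoldi process: build an orthonormal basis V_{m+1} of the Krylov subspace
    K_{m+1}(A, v1) and the (m+1) x m upper Hessenberg matrix Hbar with
    A @ V[:, :m] = V @ Hbar."""
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    Hbar = np.zeros((m + 1, m))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m):
        w = A @ V[:, j]
        # Modified Gram-Schmidt projection onto the previous basis vectors.
        for i in range(j + 1):
            Hbar[i, j] = V[:, i] @ w
            w -= Hbar[i, j] * V[:, i]
        if reorthogonalize:
            # One extra Gram-Schmidt pass for numerical robustness.
            for i in range(j + 1):
                correction = V[:, i] @ w
                Hbar[i, j] += correction
                w -= correction * V[:, i]
        Hbar[j + 1, j] = np.linalg.norm(w)
        if Hbar[j + 1, j] < 1e-14:   # "happy breakdown": the Krylov subspace is invariant
            return V[:, :j + 2], Hbar[:j + 2, :j + 1]
        V[:, j + 1] = w / Hbar[j + 1, j]
    return V, Hbar

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m = 200, 20
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # illustrative stand-in matrix
    r0 = rng.standard_normal(n)
    V, Hbar = arnoldi(A, r0, m)
    print("orthonormality error  :", np.linalg.norm(V.T @ V - np.eye(V.shape[1])))
    print("Arnoldi relation error:", np.linalg.norm(A @ V[:, :m] - V @ Hbar))
```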

The basis vectors span the Krylov subspace $K_m$ of order m, which is defined as

$K_m = \mathrm{span}[r_0, A r_0, \ldots, A^{m-1} r_0]$ (7)

The eigenvalues of $H_m$ are approximate eigenvalues of A, and the eigenvectors of $H_m$ define the corresponding approximate eigenvectors of A. The Hessenberg matrix produced by the Arnoldi algorithm also satisfies $H_m \equiv V_m^T A V_m$. From (6) we may also deduce the following relationship, where $e_m = [0, \ldots, 0, 1]^T$ is the m-th unit vector:

$A V_m = V_m H_m + h_{m+1,m}\, v_{m+1} e_m^T$ (8)

which can be written as

$A V_m = V_{m+1} \bar{H}_m$ (9)

where $\bar{H}_m$ is the $(m+1) \times m$ matrix obtained by appending the row $[0, \ldots, 0, h_{m+1,m}]$ below $H_m$:

$\bar{H}_m = \begin{bmatrix} H_m \\ 0 \;\cdots\; h_{m+1,m} \end{bmatrix}$

Let the approximate solution xm satisfy

$x_m = x_0 + V_m y$ (10)

where vector y is determined so that the residual rm is minimal over Km. The residual

rm associated with xm then satisfies

$r_m = b - A x_m = b - A x_0 - A V_m y$
$\quad\;\;= r_0 - V_{m+1} \bar{H}_m y$
$\quad\;\;= \beta v_1 - V_{m+1} \bar{H}_m y$
$\quad\;\;= V_{m+1}\left[\beta e_1 - \bar{H}_m y\right]$ (11)

where $e_1 = [1, 0, \ldots, 0]^T$ is the first unit vector. Minimizing the residual $r_m$ then gives

$\min_y \| r_m \|_2 = \min_y \| \beta e_1 - \bar{H}_m y \|_2$ (12)

which is minimal when $y_m$ solves the least-squares problem

$y_m = \arg\min_y \| \beta e_1 - \bar{H}_m y \|_2$ (13)


This gives an approximate solution to equation (1), as:

$x_m = x_0 + V_m y_m$ (14)

The major computational tasks of the GMRES method are, firstly, the construction of the orthonormal basis via the Arnoldi method and, secondly, the solution of the least-squares problem, equation (13). One can also include some intelligent stopping criterion so that the iteration stops when the error is lower than the uncertainty in the data. Since the amount of work and storage required rises linearly with the iteration count m, the cost rapidly becomes prohibitive. To overcome this limitation, a restart can be performed after a chosen number of iterations m, where the intermediate result is used as the new initial solution and the accumulated data are cleared. When to restart, i.e. the choice of m, is a matter of experience.
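As an illustrative, self-contained sketch (not the solver used in the paper, which applies A matrix-free to the preconditioned system of the next section), the following restarted GMRES(m) follows equations (3)-(14): build the Arnoldi basis from the current residual, solve the small least-squares problem (13) with numpy.linalg.lstsq, update the solution via (14), and restart. The test system is an arbitrary stand-in.

```python
import numpy as np

def gmres_restarted(A, b, x0=None, m=40, max_restarts=50, tol=1e-10):
    """Restarted GMRES(m) for A x = b, following Saad & Schultz [1]."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    for restart in range(max_restarts):
        r = b - A @ x                      # eq. (3)
        beta = np.linalg.norm(r)           # eq. (4)
        if beta < tol:
            return x, restart
        V = np.zeros((n, m + 1))
        Hbar = np.zeros((m + 1, m))
        V[:, 0] = r / beta
        k = m
        for j in range(m):                 # Arnoldi process, eq. (6)
            w = A @ V[:, j]
            for i in range(j + 1):
                Hbar[i, j] = V[:, i] @ w
                w -= Hbar[i, j] * V[:, i]
            Hbar[j + 1, j] = np.linalg.norm(w)
            if Hbar[j + 1, j] < 1e-14:     # happy breakdown
                k = j + 1
                break
            V[:, j + 1] = w / Hbar[j + 1, j]
        g = np.zeros(k + 1)
        g[0] = beta                         # beta * e1 of eq. (12)
        # Least-squares problem (13): minimize || beta e1 - Hbar y ||_2
        y, *_ = np.linalg.lstsq(Hbar[:k + 1, :k], g, rcond=None)
        x = x + V[:, :k] @ y               # eq. (14)
    return x, max_restarts

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 300
    A = np.eye(n) + 0.02 * rng.standard_normal((n, n))  # illustrative, well-conditioned stand-in
    b = rng.standard_normal(n)
    x, restarts = gmres_restarted(A, b, m=40)
    print("restarts used:", restarts, " final residual:", np.linalg.norm(b - A @ x))
```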

Since the spectrum of matrix A is extremely rich, we are required to compute a great number of basis vectors $V_m$ to obtain the Krylov subspace needed for reaching convergence via the GMRES method alone. For this reason we need to apply some intelligent preconditioning procedure to our linear set of equations (2) to transform the spectrum of matrix A in such a way that GMRES works efficiently.

3 PRECONDITIONING

The idea behind the present preconditioning procedure is to lower the dimension of the Krylov subspace needed for good resolution of the critical part of A's spectrum. This is obtained by transforming the spectrum of matrix A in a certain manner, which can be done by the following steps. First consider the general solution to the ODE (2), with the added pseudo time variable:

$x^{(n+1)} = e^{-TA}\, x^{(n)} + (I - e^{-TA})\, A^{-1} b$ (15)

Since we are interested in the steady solution, the added pseudo time derivative $\dot{x}$ needs to be zero when convergence is reached; hence at convergence $x^{(n+1)} = x^{(n)} = x$. The linear set of equations can then be reformulated as (16). Note that T is the time between $t_n$ and $t_{n+1}$.

$\underbrace{(I - e^{-TA})}_{C}\, x = \underbrace{(I - e^{-TA})\, A^{-1} b}_{d}, \qquad C x = d$ (16)

The preconditioned system (16) allows GMRES to resolve the critical part of A's spectrum on a Krylov subspace of surprisingly low dimension. In the following subsections, an explanation of how to determine the matrix C and the vector d is given.
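Purely as an illustration of the transformation (16) (not part of the paper), the sketch below forms the exact $C = I - e^{-TA}$ and $d = C A^{-1} b$ with scipy.linalg.expm for a tiny stand-in system and verifies that $Cx = d$ has the same solution as $Ax = b$. In the actual method, C and d are never formed explicitly; their action is obtained through pseudo time stepping, as explained below.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n, T = 20, 2.0
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # illustrative stand-in for the LNS operator
b = rng.standard_normal(n)

# Preconditioned system of eq. (16): C x = d with C = I - exp(-T A), d = C A^{-1} b.
C = np.eye(n) - expm(-T * A)
d = C @ np.linalg.solve(A, b)

x_orig = np.linalg.solve(A, b)   # solution of the original system (1)
x_prec = np.linalg.solve(C, d)   # solution of the preconditioned system (16)
print("difference between the two solutions:", np.linalg.norm(x_orig - x_prec))

# The point of the transformation: for eigenvalues of A in the right half-plane, the
# corresponding eigenvalues 1 - exp(-T*lambda) of C cluster inside the unit disk around 1,
# which GMRES can resolve with a low-dimensional Krylov subspace.
print("real parts of eig(A):", np.linalg.eigvals(A).real.min(), "to", np.linalg.eigvals(A).real.max())
print("real parts of eig(C):", np.linalg.eigvals(C).real.min(), "to", np.linalg.eigvals(C).real.max())
```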

However, first note that an approximation to an expression like $e^{At}$ can be obtained by applying a third-order Runge-Kutta scheme to an equation of the form

$\dfrac{\partial u}{\partial t} = A u$ (17)

which has the analytical solution $u(t) = e^{At} u(0)$.

Solving it approximately, using a commonly applied 3-stage Runge-Kutta time algorithm, the following steps would be performed:

$u^{*} = u^{n} + \Delta t\, A u^{n}$
$u^{**} = \tfrac{1}{2} u^{n} + \tfrac{1}{2} u^{*} + \tfrac{1}{2}\Delta t\, A u^{*}$
$u^{n+1} = \tfrac{1}{2} u^{n} + \tfrac{1}{2} u^{*} + \tfrac{1}{2}\Delta t\, A u^{**} = u^{n} + \Delta t\, A u^{n} + \tfrac{1}{2}\Delta t^{2} A^{2} u^{n} + \tfrac{1}{4}\Delta t^{3} A^{3} u^{n}$ (18)

This means that, at each new time step, we obtain an approximate solution to $e^{A\Delta t}$ as:

$e^{A\Delta t} \approx I + \Delta t A + \tfrac{1}{2}\Delta t^{2} A^{2} + \tfrac{1}{4}\Delta t^{3} A^{3} = P_{RK}(\Delta t A)$ (19)

If we perform a Taylor expansion of $e^{A\Delta t}$,

$e^{A\Delta t} = I + \Delta t A + \tfrac{1}{2}\Delta t^{2} A^{2} + \tfrac{1}{6}\Delta t^{3} A^{3} + \mathrm{HOT}$ (20)

and compare with equation (19), it is obvious that the 3-stage Runge-Kutta scheme gives a second-order accurate approximation in terms of a Taylor expansion.
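A short numerical check (illustrative only, with a random stand-in matrix) of equations (18)-(19): one 3-stage Runge-Kutta step applied to $\partial u/\partial t = Au$ reproduces the polynomial $P_{RK}(\Delta t A)$, which matches the Taylor expansion of $e^{A\Delta t}$ through the $\Delta t^{2}$ term.

```python
import numpy as np
from scipy.linalg import expm

def rk3_step(A, u, dt):
    """One step of the 3-stage Runge-Kutta scheme (18) applied to du/dt = A u."""
    u1 = u + dt * (A @ u)
    u2 = 0.5 * u + 0.5 * u1 + 0.5 * dt * (A @ u1)
    return 0.5 * u + 0.5 * u1 + 0.5 * dt * (A @ u2)

def p_rk(A, dt):
    """The equivalent one-step propagation polynomial P_RK(dt*A) of eq. (19)."""
    B = dt * A
    return np.eye(A.shape[0]) + B + 0.5 * B @ B + 0.25 * B @ B @ B

rng = np.random.default_rng(4)
n, dt = 10, 1e-2
A = rng.standard_normal((n, n))          # arbitrary stand-in matrix
u0 = rng.standard_normal(n)

print("RK3 step vs P_RK(dt A) u0:", np.linalg.norm(rk3_step(A, u0, dt) - p_rk(A, dt) @ u0))
print("P_RK(dt A) vs exp(dt A)  :", np.linalg.norm(p_rk(A, dt) - expm(dt * A)))  # O(dt^3) error
```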

This time-stepping procedure needs to be performed N times to cover the wanted overall time interval $T = N\Delta t$, resulting in an approximate solution to equation (17) of the form

$e^{AT} \approx \left[I + \Delta t A + \tfrac{1}{2}\Delta t^{2} A^{2} + \tfrac{1}{4}\Delta t^{3} A^{3}\right]^{N} = \left[P_{RK}(\Delta t A)\right]^{N}$ (21)

3.1 C MATRIX

An approximation to the left-hand side (LHS) of equation (16) is described first. Starting by solving the homogeneous case of equation (2), i.e. when b = 0, with the 3-stage Runge-Kutta method, we obtain an approximate solution of the form

$x^{(n+1)} = \left[P_{RK}(-\Delta t A)\right]^{N} x^{(n)}$ (22)

Subtract this from the initial solution to obtain

$x^{(n)} - x^{(n+1)} = \left[I - \left(P_{RK}(-\Delta t A)\right)^{N}\right] x^{(n)}$ (23)

and we arrive at the wanted LHS of equation (16). In words, the LHS can be approximated by applying N pseudo time steps $\Delta t$ with the 3-stage Runge-Kutta scheme to the homogeneous case (b = 0) and taking the difference between the initial vector $x^{(n)}$ and the updated vector $x^{(n+1)}$.

3.2 d VECTOR

The right-hand side (RHS) of the linear equation (16) can be approximated in a similar manner as the LHS. Use the 3-stage Runge-Kutta scheme to obtain an approximate solution to equation (2) with a zero initial vector ($x^{(n)} = 0$) but with the specified driving vector b, according to

$x^{(n+1)} = \left[I - \left(P_{RK}(-\Delta t A)\right)^{N}\right] A^{-1} b$ (24)

In words, the wanted RHS of equation (16) can be approximated by performing N pseudo time steps with the 3-stage Runge-Kutta scheme, using a zero initial vector but with our driving boundary conditions in the vector b.

In short, the preconditioning involves solving two different problem sets: a) the homogeneous case of equation (2) ($x^{(0)} \neq 0$, b = 0) using N pseudo time steps $\Delta t$, and b) the inhomogeneous case of equation (2) ($x^{(0)} = 0$, $b \neq 0$) using N pseudo time steps $\Delta t$.
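The sketch below (using an arbitrary small dense stand-in for A; the real solver applies A matrix-free) illustrates the two problem sets: N pseudo time steps of the 3-stage Runge-Kutta scheme on equation (2), run once with b = 0 to obtain the action $Cx \approx x^{(0)} - x^{(N)}$ and once with $x^{(0)} = 0$ to obtain $d \approx x^{(N)}$, and compares the result with the exact $C = I - e^{-TA}$ of equation (16).

```python
import numpy as np
from scipy.linalg import expm

def rk3_pseudo_time(A, b, x0, dt, N):
    """N pseudo time steps of the 3-stage Runge-Kutta scheme applied to x_dot = -A x + b."""
    f = lambda v: b - A @ v              # pseudo-time residual of eq. (2)
    x = x0.copy()
    for _ in range(N):
        x1 = x + dt * f(x)
        x2 = 0.5 * x + 0.5 * x1 + 0.5 * dt * f(x1)
        x = 0.5 * x + 0.5 * x1 + 0.5 * dt * f(x2)
    return x

def apply_C(A, x, dt, N):
    """Preconditioned operator, section 3.1: C x ~ x^(0) - x^(N) for the homogeneous case (b = 0)."""
    return x - rk3_pseudo_time(A, np.zeros_like(x), x, dt, N)

def form_d(A, b, dt, N):
    """Preconditioned right-hand side, section 3.2: d ~ x^(N) for a zero initial vector and driving b."""
    return rk3_pseudo_time(A, b, np.zeros_like(b), dt, N)

rng = np.random.default_rng(5)
n, dt, N = 20, 0.02, 100                 # T = N * dt covers the wanted pseudo-time interval
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # illustrative stand-in
b = rng.standard_normal(n)
x = rng.standard_normal(n)

C_exact = np.eye(n) - expm(-N * dt * A)
print("error in C x:", np.linalg.norm(apply_C(A, x, dt, N) - C_exact @ x))
print("error in d  :", np.linalg.norm(form_d(A, b, dt, N) - C_exact @ np.linalg.solve(A, b)))
```

Routines of this kind supply the matrix-vector products Cx and the right-hand side d on which the restarted GMRES iteration of section 2 then operates.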

4 RESULTS

The LNS equations in the frequency domain have been solved for a 3.5-stage (IGV, 3 rotors and 3 stators) transonic axial fan. As can be seen in figure 1, large separations of the average flow appear in the last stator passage; the figure shows average flow vectors and entropy contours.

Due to the large separations in the last stator passage, the traditional time-stepping procedure fails to reach a converged solution in this passage, as seen in figure 2. The residual history for this method, in both linear and logarithmic scale, is shown in figure 3.

In contrast to the time-stepping approach, the preconditioned GMRES procedure converged without any problems and gave entirely satisfactory solutions. The residual history is shown in figure 4 in the form of the residual norm at each GMRES restart versus computational work. Machine round-off was reached after only 3 restarts in this case.

Figure 5 shows the effect of varying the number of Krylov vectors while keeping the number of pseudo time steps in the preconditioning constant at 100. The efficiency of the method increases with the number of vectors until around 40 vectors; after that, increasing the number of vectors only increases the computational time without increasing the accuracy. Similar behavior is seen when we vary the number of pseudo time steps in the preconditioning procedure while holding the number of vectors constant at 40, figure 6. The efficiency of the method increases up to 100 time steps; after that, increasing the number of time steps only increases the computational time.

In figure 7 we compare a converged solution obtained with traditional time stepping to the corresponding converged solution using preconditioned GMRES, for a case where both methods converged. The figure shows entropy contours of the perturbation corresponding to the first-order harmonic frequency. As can be seen, the two solutions are virtually indistinguishable.

We have also computed a "tough" case, for which only the preconditioned GMRES technique is able to reach converged solutions in all passages; entropy contours for this perturbation are presented in figure 8.

5 CONCLUSION

We have used this technique on a 3.5-stage compressor, where large separation occurs in the average flow field. The traditional time-stepping procedure in general fails to reach a converged solution, while the GMRES technique with the previously described preconditioning is able to supply the desired solutions to the Linearized Navier-Stokes equations, in this application thus yielding the unsteady effects in the compressor.

Furthermore, the evaluation of the method shows that, in order to minimize the computational effort, the number of Krylov vectors and the number of pseudo time iterations in the preconditioning are important. For this compressor application, the optimum settings were found to be 40 Krylov vectors and 100 pseudo time steps.

A comparison of converged solutions obtained with traditional time stepping to the corresponding solutions using GMRES for a 1.5-stage fan shows little or no discrepancy between the converged solutions.

We have also computed a realistic multistage case, for which only the preconditioned GMRES technique is able to reach converged solutions in all passages.

This solution method can also be used for other physical applications, such as aeroacoustics and vibrating blades, which are governed by the linearized Euler or Navier-Stokes equations.

REFERENCES

[1] Y. Saad and M.H. Schultz. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput., Vol. 7, No. 3, (1986).


Figure 1: Average flow (vectors and entropy contours) in the last stator passage, with a large recirculation area in the near-hub region.

Figure 2: Entropy contours at midspan of the LNS solution obtained with the pseudo time-stepping solution technique.

[Figure 3: Residual history of the pseudo time-stepping solution technique, in linear and logarithmic scale.]



Figure 4: Residual history using the preconditioned GMRES solver for the last stator stage; each dot represents a restart of the GMRES solver, plotted against the computational work, measured in the number of pseudo time steps performed.

[Figure 5: Residual history using the preconditioned GMRES solver for the last stator stage, plotted against the computational work, for 20, 30, 40, 50 and 70 Krylov vectors with 100 time steps in the preconditioning.]



Figure 6: Residual history using the preconditioned GMRES solver for the last stator stage, plotted against the computational work. Each color represents a certain number of time steps in the preconditioning procedure, varying between 50 and 150, while the number of Krylov vectors is held constant at 40.

