Evaluation of a multigrid method for
the reduction of the computational
time of SWAN
Report
Activity 3.1 of SBW-RVW Waddenzee
A. J. van der Westhuysen and G. Ph. van Vledder
December 2007
Title:
Evaluation of a multigrid method for the reduction of the computational time of SWAN
Abstract:
The spectral wind wave model SWAN plays a key role in the estimation of the Hydraulic Boundary Conditions (HBC) for
the primary sea defences of the Netherlands. Since some uncertainty remains with respect to the reliability of SWAN for
application to the geographically complex area of the Wadden Sea, a number of activities have been initiated under project
H4918 ‘Uitvoering Plan van Aanpak SBW-RVW Waddenzee’ (Plan of Action on the Boundary Conditions for the
Wadden Sea) to devise a strategy for the improvement of the model. In this context, hindcast and sensitivity studies carried
out with SWAN for the Amelander Zeegat in the Wadden Sea have shown that significant computational times are required
to achieve results with the desired levels of numerical accuracy. This finding has led to a drive towards exploring ways to
reduce the computational time required by SWAN. The present study investigates the application of a multigrid method to
SWAN, which would improve the initial guess used in the iterative solution procedure. The aim of the present study is to
investigate the application of this multigrid method to stationary SWAN simulations of typical storm conditions in the
Wadden Sea. It is firstly aimed to assess whether this method leads to a reduction in computational time in these hindcasts.
A second, equally important aim is to determine whether the application of the multigrid method has any negative impacts
on model accuracy.
The application of a multigrid technique to SWAN was considered in two stages. During the first stage, multigrid
operation is enabled by sequentially running two separate SWAN simulations - the first on a coarse computational grid and
the second on the final, detailed grid resolution. The second stage, prompted by the positive results of the first stage, was to
incorporate the multigrid method into the SWAN source code. It was found that a grid reduction in geographical space appears to be the
most promising candidate to use in the initial guess run. The evaluation of the present implementation revealed that
simulation time can be reduced by up to 23%, without a significant loss in accuracy (measured in terms of the convergence
error). It was also found that in some cases simulation times are not significantly reduced, but that the accuracy of the
model result was strongly improved.
References:
RIKZ contract RIKZ1797 (dated 9 March 2007)
SAP bestelnummer: 45 000 73 341
Ver  Author                                   Date           Remarks      Review        Approved by
1.0  A.J. v/d Westhuysen, G. Ph. van Vledder  November 2007  Preliminary  J. Groeneweg  M.R.A. v Gent
2.0  A.J. v/d Westhuysen, G. Ph. van Vledder  December 2007  Final        M. Zijlema    M.R.A. v Gent
Project number:
H4918.38
Keywords:
SWAN, SBW-RVW Waddenzee, Amelander Zeegat, computational speed-up, multigrid
Number of pages:
87 plus figures
Classification:
None
Contents

List of Tables ... iii
List of Figures ... iv
List of Symbols ... viii

1     Introduction ... 1–1
1.1   Background ... 1–1
1.2   Iteration behaviour ... 1–1
1.3   Aim of study ... 1–3
1.4   Approach ... 1–3
1.5   Project team ... 1–4
1.6   Report structure ... 1–4

2     General method ... 2–1
2.1   Application of multigrid methods to SWAN ... 2–1
2.2   Implementation strategy ... 2–2
2.3   Test conditions and model setup ... 2–3
2.3.1 Test conditions ... 2–3
2.3.2 Discretization ... 2–3
2.3.3 Model physics ... 2–4
2.3.4 Convergence criteria ... 2–4
2.3.5 Boundaries ... 2–5

3     Viability of a multigrid approach ... 3–1
3.1   Method ... 3–1
3.1.1 Test setup and coding conventions ... 3–2
3.2   Results ... 3–3
3.2.1 Evaluation criteria ... 3–4
3.2.2 Results for storm case WZ1 ... 3–6
3.2.3 Results for storm case WN1 ... 3–11
3.2.4 Results for storm case WZ2 ... 3–11
3.3   Recommendation ... 3–12

4     Implementation and verification ... 4–1
4.2.3 Timing ... 4–10
4.2.4 Overall evaluations ... 4–11

5     Discussion ... 5–1
6     Conclusions ... 6–1
7     Recommendations ... 7–1
8     References ... 8–1
List of Tables

3.1  Run codes for the three Amelander Zeegat storm conditions.
3.2  Run codes for the various types of simulations conducted.
3.3  Run codes for the various multigrid settings tested.
3.4  Run codes for the various convergence settings in terms of required number of converged points.
3.5  Summary of the number of iterations and convergence errors for the reference and control run per multigrid option. Results for storm case WZ1.
3.6  Summary of the relative gains in computational speed (based on number of iterations) and accuracy for the reference and control run per multigrid option. Results for storm case WZ1.
3.7  Summary of the number of iterations and convergence errors for the reference and control run per multigrid option. Results for storm cases WZ1, WN1 and WZ2.
3.8  Summary of the relative gains in computational speed (based on number of iterations) and accuracy for the reference and control run per multigrid option. Results for storm cases WZ1, WN1 and WZ2.
4.1  Cases considered in the verification of the multigrid implementation.
4.2  Summary of the total simulation time and accuracy for the reference and control run of the multigrid implementation, applied to storm situations WZ1, WN1 and WZ2. Convergence setting is I990C990.
List of Figures

2.1   Bottom topography in the Wadden Sea near the tidal inlet of Ameland. Location of test points and area in which convergence errors are computed.
2.2   Current speed and direction for 9 February 2006, 11:00 hours.
2.3   Variation of significant wave height Hm0 and spectral period Tm-1,0 for 9 February 2006, 11:00 hours.
2.4   Current speed and direction for 16 December 2005, 10:00 hours.
2.5   Variation of significant wave height Hm0 and spectral period Tm-1,0 for 16 December 2005, 10:00 hours.
3.1   Convergence behaviour of integral wave parameters in the Wadden Sea for test point 1. Multigrid settings: Rx=2, Ry=2, Rθ=1, Rσ=1 (x2y2d1s1), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.2   Convergence behaviour of integral wave parameters in the Wadden Sea for test point 2. Multigrid settings: Rx=2, Ry=2, Rθ=1, Rσ=1 (x2y2d1s1), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.3   Convergence behaviour of integral wave parameters in the Wadden Sea for test point 3. Multigrid settings: Rx=2, Ry=2, Rθ=1, Rσ=1 (x2y2d1s1), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.4   Convergence behaviour of integral wave parameters in the Wadden Sea for test point 1. Multigrid settings: Rx=1, Ry=1, Rθ=2, Rσ=2 (x1y1d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.5   Convergence behaviour of integral wave parameters in the Wadden Sea for test point 2. Multigrid settings: Rx=1, Ry=1, Rθ=2, Rσ=2 (x1y1d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.6   Convergence behaviour of integral wave parameters in the Wadden Sea for test point 3. Multigrid settings: Rx=1, Ry=1, Rθ=2, Rσ=2 (x1y1d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.7   Convergence behaviour of integral wave parameters in the Wadden Sea for test point 1. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.8   Convergence behaviour of integral wave parameters in the Wadden Sea for test point 2. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.9   Convergence behaviour of integral wave parameters in the Wadden Sea for test point 3. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.10  Convergence errors of the multigrid method for the reference and control run for the significant wave height Hm0 and spectral period Tm-1,0. Multigrid settings: Rx=2, Ry=2, Rθ=1, Rσ=1 (x2y2d1s1), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.11  Convergence errors of the multigrid method for the reference and control run for the mean wave direction Dir and directional spreading Dspr. Multigrid settings: Rx=2, Ry=2, Rθ=1, Rσ=1 (x2y2d1s1), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.12  Convergence errors of the multigrid method for the reference and control run for the significant wave height Hm0 and spectral period Tm-1,0. Multigrid settings: Rx=1, Ry=1, Rθ=2, Rσ=2 (x1y1d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.13  Convergence errors of the multigrid method for the reference and control run for the mean wave direction Dir and directional spreading Dspr. Multigrid settings: Rx=1, Ry=1, Rθ=2, Rσ=2 (x1y1d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.14  Convergence errors of the multigrid method for the reference and control run for the significant wave height Hm0 and spectral period Tm-1,0. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.15  Convergence errors of the multigrid method for the reference and control run for the mean wave direction Dir and directional spreading Dspr. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, with currents (WZ1).
3.16  Convergence behaviour of integral wave parameters in the Wadden Sea for test point 1. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, without currents (WN1).
3.17  Convergence behaviour of integral wave parameters in the Wadden Sea for test point 2. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, without currents (WN1).
3.18  Convergence behaviour of integral wave parameters in the Wadden Sea for test point 3. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, without currents (WN1).
3.19  Convergence errors of the multigrid method for the reference and control run for the significant wave height Hm0 and spectral period Tm-1,0. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, without currents (WN1).
3.20  Convergence errors of the multigrid method for the reference and control run for the mean wave direction Dir and directional spreading Dspr. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 9 February 2006, 11:00 hours, without currents (WN1).
3.21  Convergence behaviour of integral wave parameters in the Wadden Sea for test point 1. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 16 December 2005, 10:00 hours, with currents (WZ2).
3.22  Convergence behaviour of integral wave parameters in the Wadden Sea for test point 2. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 16 December 2005, 10:00 hours, with currents (WZ2).
3.23  Convergence behaviour of integral wave parameters in the Wadden Sea for test point 3. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 16 December 2005, 10:00 hours, with currents (WZ2).
3.24  Convergence errors of the multigrid method for the reference and control run for the significant wave height Hm0 and spectral period Tm-1,0. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 16 December 2005, 10:00 hours, with currents (WZ2).
3.25  Convergence errors of the multigrid method for the reference and control run for the mean wave direction Dir and directional spreading Dspr. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990. Storm of 16 December 2005, 10:00 hours, with currents (WZ2).
4.1   Flow chart of the main control loop for stationary simulation in the default version of SWAN 40.51A.
4.2   Flow chart of the main control loop for the multigrid implementation in SWAN, for stationary simulation.
4.3   Comparison between the iteration behaviour of the multigrid implementation and the ad-hoc model for case WZ1 for test point 1. Multigrid settings: Rx=2, Ry=2, Rθ=1, Rσ=1 (x2y2d1s1), and I990C990.
4.4   Comparison between the iteration behaviour of the multigrid implementation and the ad-hoc model for case WZ1 for test point 2. Multigrid settings: Rx=2, Ry=2, Rθ=1, Rσ=1 (x2y2d1s1), and I990C990.
4.5   Comparison between the iteration behaviour of the multigrid implementation and the ad-hoc model for case WZ1 for test point 3. Multigrid settings: Rx=2, Ry=2, Rθ=1, Rσ=1 (x2y2d1s1), and I990C990.
4.6   Comparison between the iteration behaviour of the multigrid implementation and the ad-hoc model for case WZ1 for test point 1. Multigrid settings: Rx=1, Ry=1, Rθ=2, Rσ=2 (x1y1d2s2), and I990C990.
4.7   Comparison between the iteration behaviour of the multigrid implementation and the ad-hoc model for case WZ1 for test point 2. Multigrid settings: Rx=1, Ry=1, Rθ=2, Rσ=2 (x1y1d2s2), and I990C990.
4.8   Comparison between the iteration behaviour of the multigrid implementation and the ad-hoc model for case WZ1 for test point 3. Multigrid settings: Rx=1, Ry=1, Rθ=2, Rσ=2 (x1y1d2s2), and I990C990.
4.9   Comparison between the iteration behaviour of the multigrid implementation and the ad-hoc model for case WZ1 for test point 1. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990.
4.10  Comparison between the iteration behaviour of the multigrid implementation and the ad-hoc model for case WZ1 for test point 2. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990.
4.11  Comparison between the iteration behaviour of the multigrid implementation and the ad-hoc model for case WZ1 for test point 3. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990.
4.12  Convergence errors of the multigrid implementation for the control run and the reference run. Wave height and period for case WZ1. Multigrid settings: Rx=2, Ry=2, Rθ=1, Rσ=1 (x2y2d1s1), and I990C990.
4.13  Convergence errors of the multigrid implementation for the control run and the reference run. Mean wave direction and spreading for case WZ1. Multigrid settings: Rx=2, Ry=2, Rθ=1, Rσ=1 (x2y2d1s1), and I990C990.
4.14  Convergence errors of the multigrid implementation for the control run and the reference run. Wave height and period for case WZ1. Multigrid settings: Rx=1, Ry=1, Rθ=2, Rσ=2 (x1y1d2s2), and I990C990.
4.15  Convergence errors of the multigrid implementation for the control run and the reference run. Mean wave direction and spreading for case WZ1. Multigrid settings: Rx=1, Ry=1, Rθ=2, Rσ=2 (x1y1d2s2), and I990C990.
4.16  Convergence errors of the multigrid implementation for the control run and the reference run. Wave height and period for case WZ1. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990.
4.17  Convergence errors of the multigrid implementation for the control run and the reference run. Mean wave direction and spreading for case WZ1. Multigrid settings: Rx=2, Ry=2, Rθ=2, Rσ=2 (x2y2d2s2), and I990C990.
4.18  Comparison between the computational times of the multigrid implementation and the default model for case WZ1.
4.19  Comparison between the computational times of the multigrid implementation and the default model for case WN1.
List of Symbols

Symbol   Units   Description
αBJ      -       Proportionality coefficient for surf breaking (ALPHA in SWAN)
αEB      -       Proportionality coefficient for triad interaction (TRFAC in SWAN)
γBJ      -       Breaker parameter for surf breaking (GAMMA in SWAN)
εm       var.    Convergence error at a given grid index m
ε̄        var.    Mean convergence error
θ        °       Wave direction
σ        rad/s   Intrinsic radian frequency
CH       -       Maximum allowable curvature in convergence criterion
CJON     m²s⁻³   Proportionality coefficient for bottom friction (CFJON in SWAN)
Dir      °TN     Mean wave direction
Dspr     °       Directional spreading
E        var.    Relative gain in accuracy
f        Hz      Wave frequency
Hm0      m       Significant wave height
NI, NC   -       Number of iterations for the initial and control runs respectively
NMGC     -       Equivalent number of iterations for the multigrid run
NR       -       Number of iterations for the reference run
NAP      m       Dutch national levelling datum
PB,m     var.    Integral parameters produced by benchmark run
PC,m     var.    Integral parameters produced by control run
PR,m     var.    Integral parameters produced by reference run
Rx, Ry   -       Grid reduction factors in (x, y) space (GRX and GRY in SWAN)
Rσ, Rθ   -       Grid reduction factors in (σ, θ) space (GRS and GRD in SWAN)
Tm-1,0   s       Mean absolute wave period
1
Introduction
1.1
Background
The spectral wind wave model SWAN (Booij et al. 1999) plays a key role in the estimation
of the Hydraulic Boundary Conditions (HBC) for the primary sea defences of the
Netherlands. Since some uncertainty remains with respect to the reliability of SWAN for
application to the geographically complex area of the Wadden Sea, a number of activities
have been initiated under project H4918 ‘Uitvoering Plan van Aanpak SBW-RVW
Waddenzee’ (Plan of Action on the Boundary Conditions for the Wadden Sea) to devise a
strategy for the improvement of the model. This activity is carried out in parallel with a
measurement campaign that is being undertaken in the Wadden Sea to assist in the
establishment of the boundary conditions (‘SBW-Veldmetingen’). In this context, hindcast
and sensitivity studies carried out with SWAN for the Amelander Zeegat in the Wadden Sea
(WL 2006; Royal Haskoning 2006; WL 2007b) have shown that significant computational
times are required (for the latter study, approximately 2.5 hours on a 3.4 GHz Pentium
processor with 1.0 GB RAM) to achieve results with the desired levels of numerical
accuracy. The computation of the complete HBC with SWAN, which includes a great
number of environmental conditions and a model domain of the entire Wadden Sea, would
therefore result in a substantial total computational time. This finding has led to a drive
towards exploring ways to reduce the computational time required by SWAN. In calculating
the HBC, simulation times can be reduced either by employing parallel computing and
high-performance processors in combination with the standard model code, or by altering the
computational algorithm of the model itself (or a combination of the two). The current
project explores the avenue of adapting the model code, in which two methods for the
reduction of computational time are investigated. In the first part of this project, described in
WL (2007a), the deactivation of converged grid points during the iteration process was
considered. In the second part of this project, described in the present report, the application
of multigrid methods is investigated, which leads to the improvement of the initial guess
used in the iterative solution.
1.2
Iteration behaviour
The final solution of the action balance equation is not found after just one set of four
sweeps; rather, these sweeps need to be repeated over a number of iterations
(henceforth referred to simply as iterations). This is for a number of reasons. Firstly, iteration
iteration is required because of the linearization of the source terms. Secondly, action
density can be transferred from one directional quadrant (sweep direction) to a neighbouring
quadrant by the processes of refraction and quadruplet nonlinear interaction. This would
require the sweep for the neighbouring quadrant to be repeated during a subsequent
iteration. In addition, in order to stabilise the source term integration, SWAN makes use of
an action density limiter (Hersbach and Janssen 1999) that limits the amount of change in
action density during each iteration. After each set of four sweeps, the total change in each
spectral bin is truncated to a certain percentage (default 10 %) of the Phillips equilibrium
spectrum. This implies that the actual change in action density prescribed by the physics
may not be realised after only one Gauss-Seidel solution procedure of four sweeps.
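The effect of such a limiter can be illustrated with a minimal sketch. This is not SWAN's actual code: the function and the stand-in Phillips equilibrium levels are purely illustrative, and only the clamping behaviour described above is shown.

```python
import numpy as np

def limit_update(n_old, n_prop, n_phillips, gamma=0.1):
    """Clamp the per-iteration change in action density in each spectral
    bin to a fraction gamma (SWAN default: 10%) of a Phillips-type
    equilibrium level. Illustrative sketch only."""
    delta = n_prop - n_old                # change requested by the physics
    delta_max = gamma * n_phillips        # ceiling imposed by the limiter
    return n_old + np.clip(delta, -delta_max, delta_max)

# The physics asks the first bin to jump from 1.0 to 4.0, far beyond the
# allowed change of 0.1 per iteration, so the limiter forces that change
# to be realised over many iterations; the small change in the second bin
# passes through untouched.
n_new = limit_update(np.array([1.0, 2.0]),
                     np.array([4.0, 2.05]),
                     n_phillips=np.array([1.0, 1.0]))
print(n_new)  # first bin capped at 1.1, second bin reaches 2.05
```

This is precisely why stationary runs, with their effectively infinite time step, need many iterations: the requested change per iteration routinely exceeds the cap.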
Studies have shown that the influence of the action density limiter is the primary reason for
requiring multiple iterations (e.g. Zijlema and Van der Westhuysen 2005, Fraza 1998). In
non-stationary simulations, the change in action density per time step prescribed by the
model physics tends to be of the same order as the amount allowed by the action limiter, so
that three iterations per time step appears to be sufficient (Fraza 1998). Stationary
simulations, on the other hand, typically require many more iterations before convergence is
reached. During the stationary solution procedure, the time step is infinite, so that the
change in action density during a single iteration can be far greater than the amount of
change allowed by the action limiter. To alleviate this problem, stationary SWAN
simulations are initialised with a so-called first guess of the final solution, so that the
amount of change required to reach the converged solution is reduced. Yet a number of
studies have shown that in stationary mode SWAN still often requires more than 30
iterations to reach full convergence (e.g. Zijlema and Van der Westhuysen 2005; Van der
Westhuysen et al. 2005; Alkyon 2007). This relatively slow convergence can be seen in
wave parameters such as the significant wave height, period measures and the mean wave
direction. Since the computational time per iteration can be significant for detailed
simulations, the need for such a great number of iterations can require substantial total
computational time. For the application of SWAN to the Wadden Sea to derive the HBC,
interest is primarily in the stationary mode of simulation, hence the remainder of this study
will be limited to this mode of operation.
numerical techniques called multigrid methods (Ferziger and Perić 2002). Since
computation on the reduced grid is faster than on the original grid, and since the good initial
guess from the coarse run typically reduces the required number of iterations on the original
detailed grid, the combined time required for the two simulations is typically less than the
time required when iterating on the detailed grid only.
Van Vledder (2005) and Alkyon (2005) have demonstrated that such an approach is a
promising candidate for reducing the simulation time of SWAN. The action density, the
unknown variable to be solved, is defined in four dimensions – two in geographical space
and two in spectral space (in stationary simulations the dimension of time is neglected).
Application of a multigrid method entails the execution of a coarse simulation in which the
computational grid is reduced with respect to any of these four dimensions. Therefore, a
coarse run, starting with a second-generation first guess, would be performed, which would
provide an initial estimate of the final solution. This estimate, which represents the complete
set of user-defined model physics, is then used as the starting point in the iteration process
on the detailed grid. The application of the FMG (full multigrid) method to stationary SWAN simulation can
therefore also be interpreted as the replacement of the second-generation first guess with an
initial estimate of the final solution using the actual third-generation physics applied in the
SWAN simulation.
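The potential gain can be made concrete with a simple cost model. Assume, as an idealisation that ignores interpolation and start-up overheads, that each coarse-grid iteration costs a fraction 1/(Rx·Ry·Rσ·Rθ) of a fine-grid iteration; a multigrid run can then be expressed as an equivalent number of fine-grid iterations (cf. the "equivalent number of iterations for the multigrid run" in the list of symbols) and compared with a reference run. The iteration counts below are illustrative, not results from this study.

```python
def equivalent_iterations(n_init, n_ctrl, rx, ry, rd, rs):
    """Equivalent fine-grid iteration count of a multigrid run under the
    assumed cost model: each coarse iteration costs 1/(rx*ry*rd*rs) of a
    fine-grid iteration, plus the full-cost control-run iterations."""
    return n_init / (rx * ry * rd * rs) + n_ctrl

# Coarse run reduced by a factor 2 in x and y only (run code x2y2d1s1):
# 20 coarse iterations cost as much as 5 fine ones, followed by 15
# fine-grid iterations in the control run.
n_mg = equivalent_iterations(n_init=20, n_ctrl=15, rx=2, ry=2, rd=1, rs=1)
gain = 1 - n_mg / 30  # hypothetical reference run of 30 iterations
print(n_mg, round(gain, 2))  # 20.0 equivalent iterations, i.e. a 33% gain
```

Under this sketch, the method pays off whenever the cheap coarse run removes more fine-grid iterations than the control run still needs.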
1.3
Aim of study
The aim of the present study is to investigate the application of this multigrid method to
stationary SWAN simulations of typical storm conditions in the Wadden Sea. It is firstly
aimed to assess whether this method leads to a reduction in computational time in these
hindcasts. A second, equally important aim is to determine whether the application of the
multigrid method has any negative impact on model accuracy.
1.4
Approach
this implementation, the definitive assessment of the performance of the multigrid method
was made.
1.5
Project team
This study was carried out by André van der Westhuysen and Gerbrant van Vledder
(Alkyon). The internal quality assurance and review was carried out by Jacco Groeneweg,
and the external review was done by Marcel Zijlema (Delft University of Technology).
1.6
Report structure
2
General method
This section describes the general method of introducing a multigrid technique into SWAN
that has been followed in the present study. It considers the application possibilities of
multigrid methods to SWAN (Section 2.1), describes the strategy for implementation of a
multigrid method in SWAN (Section 2.2) and the test cases considered to test this
implementation (Section 2.3).
2.1
Application of multigrid methods to SWAN
The numerical model SWAN simulates wind wave fields in terms of the action density N by
solving the so-called action balance equation:
\[
\frac{\partial N}{\partial t}
+ \nabla_{x,y} \cdot \left[ \left( \vec{c}_g + \vec{U} \right) N \right]
+ \frac{\partial c_\sigma N}{\partial \sigma}
+ \frac{\partial c_\theta N}{\partial \theta}
= \frac{S_{tot}}{\sigma}
\qquad (2.1)
\]
The first term of (2.1) is the time derivative of the action density; the second term denotes
the propagation of wave action in two-dimensional geographical space (x, y), with cg the
group velocity and U the ambient current velocity. The third term represents the effect of
shifting of the intrinsic radian frequency σ = 2πf (where f is the intrinsic frequency) due to
variations in depth and mean currents. The fourth term represents depth-induced and
current-induced refraction. The quantities cσ and cθ are the propagation velocities in spectral space
(σ, θ), in which θ is the wave propagation direction. The right-hand side contains the source
term Stot that represents all physical processes that generate, dissipate or redistribute wave
energy.
As outlined in Section 1, the principle behind the FMG method is to conduct an initial
simulation using a computational grid that is coarser than the computational grid on which
the final solution is required. Using this coarse grid, a fast estimate of the solution is
obtained. This result is then interpolated onto the final grid resolution, where it serves as the
initial guess for the simulation on the fine grid. Considering the description of the action
balance equation (2.1) given above, the computational grid resolution of SWAN can
potentially be reduced both in its two geographical dimensions and in its two spectral
dimensions. Of the two spaces, reducing the resolution of the geographical domain appears
to be the lesser intrusive. This is because this choice does not distort the local spectral
balance at a particular geographical grid point – it has been shown that the source term for
quadruplet interaction is sensitive to a departure of the discretisation from f/f = 0.1, and
that an overly coarse directional discretisation can lead to a strong manifestation of the
so-called garden sprinkler effect (Van Vledder et al. 2000; Booij and Holthuijsen, 1987). On the
other hand, the total computational effort is significantly reduced by using a coarser spectral
resolution. Therefore, in the present study, the possibility of reducing the computational grid
in all four dimensions is considered.
2.2
Implementation strategy
The implementation of the multigrid technique was considered in two stages. During the
first stage, multigrid operation is enabled by sequentially running two separate SWAN
simulations - the first on a coarse computational grid and the second on the final, detailed
grid resolution. The initial coarse grid run outputs the wave field state at the end of the
simulation to a so-called hotfile. A post-processing program outside of SWAN reads the
contents of this hotfile, and interpolates these results to the final detailed resolution. This
program consists of one module to account for interpolation in geographical space, and
another for interpolation in spectral space. The latter module takes care of the periodicity of
wave directions. These results are used to initialise the subsequent simulation on the detailed
computational grid.
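The directional-periodicity aspect of such spectral interpolation can be sketched as follows. This is an illustration with numpy, not the actual post-processing program: the coarse directional axis is extended by one wrapped-around bin on each side, so that linear interpolation never extrapolates across the 0°/360° seam.

```python
import numpy as np

def interp_directions(coarse_dirs_deg, coarse_vals, fine_dirs_deg):
    """Interpolate a spectrum slice from a coarse to a fine directional
    grid, honouring the 360-degree periodicity of wave direction.
    Illustrative sketch only."""
    d = np.asarray(coarse_dirs_deg, dtype=float)
    v = np.asarray(coarse_vals, dtype=float)
    # Pad with one wrapped bin on each side so the axis stays increasing
    # and covers every target direction in [0, 360).
    d_ext = np.concatenate(([d[-1] - 360.0], d, [d[0] + 360.0]))
    v_ext = np.concatenate(([v[-1]], v, [v[0]]))
    return np.interp(np.mod(fine_dirs_deg, 360.0), d_ext, v_ext)

coarse = np.arange(0.0, 360.0, 20.0)      # coarse 20-degree bins (e.g. Rθ = 2)
fine = np.arange(0.0, 360.0, 10.0)        # target 10-degree bins
vals = np.cos(np.radians(coarse)) + 1.0   # some smooth directional shape
print(interp_directions(coarse, vals, fine).shape)  # one value per fine bin
```

Without the wrapped padding, a fine bin at 350° would be extrapolated from the 340° bin alone instead of being blended with the 0° bin.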
This ad-hoc method, which does not require any alteration to the model code, is intended to
assess the viability of applying the multigrid concept to SWAN. During this viability study,
a number of options for the reduction of the computational grid, both in geographical and
spectral space, were tested. This was done in order to identify the optimal grid reduction to
produce the greatest decrease in total simulation times. In addition, the settings for the
convergence criteria used in the initial and detailed model runs were considered. For
example, the accuracy of the initial guess may be improved by carrying out a relatively large
number of iterations on the computationally cheap coarser grid, and hence stricter
convergence criteria can be used there.
2.3
Test conditions and model setup
The application of a multigrid method in SWAN was investigated for two field cases in the
Amelander Zeegat in the Dutch Wadden Sea that feature a variety of physical processes and
a complex bathymetry including barrier islands, an ebb tidal delta, tidal channels and shoals
(Figure 2.1). This figure also contains the locations of the three test points used for the
evaluation of the convergence behaviour of SWAN and a selected area used for the
determination of the accuracy of the solutions. The three test points were selected on the
basis of regions where poor model accuracy was found. The rectangular evaluation area
was chosen over the tidal inlet, since this is the main area of interest in the present study.
Details of the selected field conditions and of the general model setup for SWAN, used
throughout this study, are given below. An example of a SWAN input file used in this study
is given in Appendix A.
2.3.1 Test conditions
Two field cases observed in the Amelander Zeegat are selected for the application of the
multigrid method. These two field cases are the same as those considered in the first phase
of this project (WL 2007a). The first field case, taken on 9 February 2006 at 11:00, features
an offshore wave condition of Hm0 = 5.0 m and Tm-1,0 = 10.0 s from NW (observed at buoys
AZB11 and AZB12 located just offshore of the tidal inlet), with a wind of U10 = 19.5 m/s,
also from NW. Figure 2.2 shows the current field for this simulation time, computed by the
WAQUA flow model. At the time of the observations it was ebb tide, with a maximum
computed current in the main tidal channel of about 0.7 m/s, and a weaker current of about
0.2 m/s over the tidal flats. Based on tidal observations along the coasts of Terschelling
(Station TERS) and Ameland (Station NES), a spatially uniform water level of +0.5 m NAP
is set over the model domain. Simulation results of the variation of the significant wave
height Hm0 and spectral period Tm-1,0, produced using the model setup described below, are
shown in Figure 2.3. The mean wave direction is indicated by arrows, which are scaled with
the significant wave height.
The second field case, recorded on 16 December 2005 at 10:00, features an offshore wave
condition of Hm0 = 5.4 m and Tm-1,0 = 9.5 s from NW, with a wind of U10 = 17.5 m/s from
NNW. At the time of the observations it was high tide, and the WAQUA flow model results
show a flood current in the main tidal channel (Figure 2.4), which had a magnitude of about
0.6 m/s. Based on tidal observations at stations TERS and NES, a spatially uniform water
level of +2.0 m NAP is set over the model domain. The variation of the simulated significant
wave height Hm0 and spectral period Tm-1,0 is shown in Figure 2.5. The mean wave direction
is again indicated by arrows, scaled with the significant wave height.
2.3.2 Discretization
Royal Haskoning (2006). A rectangular computational grid was used in the geographical
domain, with a grid spacing of Δx = Δy = 100 m. In the frequency domain, a
directional discretization of Δθ = 10° and a geometric frequency distribution of Δf/f = 0.1
were used, with a frequency range of 0.03-1.0 Hz. These discretizations correspond to the
overall computational grid used in the hindcast study of Royal Haskoning (2006), and
represent a typical optimum choice between numerical accuracy on the one hand, and
computational effort on the other.
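As a quick check (the bin count below is computed here, not quoted from the report), the number of frequency bins implied by these choices follows directly from the geometric distribution f(i+1) = 1.1 f(i):

```python
import math

# Geometric frequency grid with Delta f / f = 0.1 over 0.03-1.0 Hz.
f_low, f_high, ratio = 0.03, 1.0, 1.1
n_steps = math.ceil(math.log(f_high / f_low) / math.log(ratio))
freqs = [f_low * ratio**i for i in range(n_steps + 1)]
print(n_steps, round(freqs[-1], 3))  # 37 geometric steps; the last bin
                                     # slightly overshoots the nominal 1.0 Hz
```

A reduction factor of 2 in this dimension would halve this bin count, which is where part of the coarse-run speed-up comes from.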
2.3.3 Model physics
The computations were performed in third-generation mode, using the SWAN model
version 40.51A. For wind-wave generation, the setting WESTH was used, which features
the combination of wind input and saturation-based whitecapping proposed by Van der
Westhuysen et al. (2007). Quadruplet interactions are modelled using the Discrete Interaction
Approximation of Hasselmann et al. (1985). Wind fields were modelled as spatially
uniform. The shallow source terms include triad interaction according to Eldeberky (1996)
using
EB= 0.05 and CUTFR = 2.2, surf breaking according to Battjes and Janssen (1978)
using
BJ= 1 and
BJ= 0.73. Bottom friction is modelled according to the JONSWAP
formulation with C
JON= 0.067 (Hasselmann et al. 1973). These settings are activated by the
following user commands:
BREAKING
1.
0.73
FRICTION
JONSWAP
0.067
TRIAD
0.05
2.2
GEN3
WESTH
Apart from the parameter choice for the triad interaction term, these settings agree with
those used in the hindcast studies of WL (2006), Royal Haskoning (2006) and WL (2007b).
2.3.4 Convergence criteria
The convergence criterion selected for this study is the curvature criterion proposed by Zijlema and Van der Westhuysen (2005), applied with a maximum curvature of CH = 0.001. This option is activated with the following command:
NUM STOPC 0.000 0.010 0.001 [PERC] STAT mxitst=50 alfa = 0.0
In the investigations presented in Sections 3 and 4, the strictness of the convergence criterion was varied in terms of the required percentage of converged points (see Section 3.1.1); hence the [PERC] field remained variable. Under-relaxation was not applied.
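The idea behind the curvature criterion is that a grid point is accepted only when the normalized second difference of Hm0 over the last three iterations drops below CH, so that a still-oscillating iteration curve is not falsely flagged as converged. A minimal sketch of such a test (illustrative only; the actual implementation of Zijlema and Van der Westhuysen (2005) in SWAN differs in detail):

```python
def curvature_converged(h_iter, c_h=0.001):
    """Sketch of a curvature-based stopping test: accept a point when the
    normalized second difference of Hm0 over the last three iterations is
    below c_h. Illustrative only, not the SWAN implementation."""
    if len(h_iter) < 3:
        return False  # need at least three iterates to form a curvature
    h0, h1, h2 = h_iter[-3], h_iter[-2], h_iter[-1]
    curvature = abs(h2 - 2.0 * h1 + h0) / h2
    return curvature < c_h

# A point whose Hm0 still oscillates between iterations is not accepted...
print(curvature_converged([2.0, 1.5, 1.9]))        # False
# ...while a smoothly flattening iteration curve is.
print(curvature_converged([1.800, 1.801, 1.802]))  # True
```

Note that a criterion based on the first difference alone would accept the oscillating series whenever two successive iterates happen to lie close together, which is precisely the failure mode the curvature measure avoids.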
2.3.5 Boundaries
3 Viability of a multigrid approach
Before a full implementation of the multigrid method in SWAN was made, the concept of
applying this method was tested in a viability study, which is presented below. In
Section 3.1, the method of analysis is described, followed by a presentation of the results
(Section 3.2). Section 3.3 presents a summary of the results of this investigation, including
the recommendation to implement this method into SWAN.
3.1 Method
In this viability study, the application of a multigrid method to SWAN was tested by
sequentially running two default SWAN simulations - first on a coarse grid and second on
the final, detailed grid. Using these two sequential runs, the effectiveness of applying a
multigrid method to SWAN was tested by performing a systematic analysis of the effect of
different reductions in geographical and spectral grid resolution for the initial guess on the
convergence behaviour and accuracy of the combined (initial plus detailed) simulation. This
analysis was carried out for the field cases presented in Section 2.3.1. The two field
conditions considered both feature current fields, which are included in the simulations. To
assess the influence of currents on the effectiveness of the multigrid method, the first field
condition was also investigated with its current field deactivated. Table 3.1 summarizes the
main features of these cases.
For each field case, four types of simulations were carried out. Firstly, a benchmark run was
carried out using 50 iterations to obtain an estimate of the converged solution. Secondly, a
reference run was conducted, using selected convergence criteria, to determine the so-called
convergence error of the default model (the difference between its solution at the end of the
iteration process and the benchmark solution). Thirdly, for the multigrid method, a series of
initial guess simulations were carried out using a reduced grid resolution in geographical
and/or spectral dimensions. The results of these runs (stored in hotfiles) were interpolated to
the detailed grid resolution. The converted hotfiles were used as initial condition for the
fourth and final type, namely a control run. The control runs use the same convergence
criteria as the reference runs to assess the effect of applying the initial guess. Table 3.2
below summarizes these four types of simulations.
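The conversion step between the third and fourth run types amounts to interpolating each quantity stored in the coarse-grid hotfile onto the detailed grid. A minimal sketch of such a refinement for a single scalar field, using bilinear interpolation (the actual converter operates on full 2D spectra per grid point; the function name and factor handling here are ours):

```python
import numpy as np

def refine_field(coarse, factor):
    """Bilinearly interpolate a coarse 2D field onto a grid refined by an
    integer factor in both directions. Sketch only; the real hotfile
    conversion handles the full spectrum at every grid point."""
    ny, nx = coarse.shape
    yc, xc = np.arange(ny), np.arange(nx)
    yf = np.linspace(0, ny - 1, (ny - 1) * factor + 1)
    xf = np.linspace(0, nx - 1, (nx - 1) * factor + 1)
    # Interpolate along rows first, then along columns.
    tmp = np.array([np.interp(xf, xc, row) for row in coarse])
    fine = np.array([np.interp(yf, yc, col) for col in tmp.T]).T
    return fine

coarse = np.array([[1.0, 2.0], [3.0, 4.0]])
fine = refine_field(coarse, 2)
print(fine.shape)   # (3, 3)
print(fine[1, 1])   # 2.5, the bilinear average of the four coarse values
```

The interpolated field only serves as a starting point for the control-run iteration, so the interpolation scheme needs to be cheap rather than highly accurate.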
The effectiveness of the multigrid approach was evaluated in a number of ways. Firstly, the iteration behaviour of a number of integral parameters was studied to assess how the multigrid method influences the convergence process. Secondly, it was investigated whether the
application of the multigrid method reduces the number of iterations of the control run, and
hence the overall simulation time. Thirdly, the accuracy of the result of the control run is
assessed by comparing the convergence error of the control run with that of the reference
run. Details on these methods are given in Section 3.2.1 below.
3.1.1 Test setup and coding conventions
A coding convention was defined to distinguish between the various cases and types of
computations considered during this investigation. The run code consists of a number of
elements associated with the storm case, the type of SWAN simulation, the reduction factors
applied in the initial guess run and the convergence criteria associated with the initial guess
run and the control run. Each of these elements of the test setup is explained below. Firstly,
the coding of the three field situations is given, namely the two field cases presented in Section 2.3.1, plus a sensitivity case with currents deactivated:
Code   Description
WZ1    Storm of 9 February 2006 with an opposing ebb current in the tidal inlet
WN1    Storm of 9 February 2006 with currents deactivated
WZ2    Storm of 16 December 2005 with a following flood current in the tidal inlet
Table 3.1: Run codes for the three Amelander Zeegat storm conditions
For each field situation, four types of simulations were performed, as described in
Section 3.1 above. These run types carry the following coding:
Code   Description
B      Benchmark run, continued up to 50 iterations
R      Reference run on the detailed grid, using convergence criteria
I      Multigrid initial guess on a coarse grid, using convergence criteria
C      Multigrid control run on the detailed grid, using convergence criteria
Table 3.2: Run codes for the various types of simulations conducted
During the initial guess runs, the SWAN computations were conducted with a reduced grid
resolution in the geographical and/or spectral dimensions. For simplicity, in the present
study only reductions by an integer factor of 2 or 3 were considered, and reductions in x and
y in geographical space were set equal. The code to identify a run with a reduced resolution
is x[Rx]y[Ry]d[Rd]s[Rs], where Rx and Ry refer to the grid reduction factors in the geographical dimensions, and Rd and Rs to those in the directional and frequency dimensions, respectively. The multigrid settings tested are:
Code       Description
x1y1d1s1   Reference run with full resolution
x2y2d1s1   Reduction in geographical space by a factor 2 in both directions
x3y3d1s1   Reduction in geographical space by a factor 3 in both directions
x1y1d2s1   Reduction of number of directions by a factor 2
x1y1d3s1   Reduction of number of directions by a factor 3
x1y1d1s2   Reduction of number of frequencies by a factor 2
x1y1d1s3   Reduction of number of frequencies by a factor 3
x1y1d2s2   Reduction of number of frequencies and directions, both by a factor 2
x2y2d2s2   Reduction in all geographical and spectral dimensions, all by a factor 2
Table 3.3: Run codes for the various multigrid settings tested
The initial guess, obtained in as few iterations as possible, should provide a starting value
for the control run iteration on the detailed grid. It is expected that the number of iterations
determines the quality of the initial guess, where the quality should be interpreted as a measure of the closeness of this initial guess solution to the final solution. It
is therefore of interest to vary the number of iterations of the initial guess run. In this study,
the number of iterations performed is influenced by the percentage of accepted points set in
the convergence criteria (see Section 2.3.4). This percentage is coded as Innn, where nnn is
the required percentage of accepted points multiplied by 10. Therefore, a 99% criterion for
the initial guess run is coded as I990. Since the initial guess is relatively cheap in terms of
computational time, it is possibly more economical to carry out relatively many iterations on
the coarse grid, which could lead to relatively few iterations on the detailed grid. The
convergence criteria considered for the initial guess are therefore tested for 99% and 99.8%.
It is also of interest to vary the convergence criteria, in terms of the number of accepted grid
points, for the control run. This percentage is coded as Cnnn, where nnn is the required
percentage of accepted points multiplied by 10. Since the control run iteration on the
detailed grid is time consuming, the associated convergence criteria are tested for slightly
less strict values, namely at 98% and 99%. This leads to the following codes to identify the
convergence criteria of an initial guess run and the associated control run:
Code       Description
I990C980   Initial guess requiring 99% of points converged; control run 98%
I990C990   Initial guess requiring 99% of points converged; control run 99%
I998C980   Initial guess requiring 99.8% of points converged; control run 98%
I998C990   Initial guess requiring 99.8% of points converged; control run 99%
Table 3.4: Run codes for the various convergence settings in terms of the required number of converged points
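A full run identifier simply concatenates the codes of Tables 3.1, 3.3 and 3.4. A small helper illustrating this convention (the function and its defaults are ours, not part of the study's tooling):

```python
def run_code(case, rx=1, ry=1, rd=1, rs=1, p_init=99.0, p_ctrl=98.0):
    """Compose a run code in the report's convention, e.g.
    'wz1 x2y2d1s1 i990c980'. Illustrative helper only."""
    res = f"x{rx}y{ry}d{rd}s{rs}"
    # Percentages are encoded as the required percentage times 10.
    conv = f"i{int(p_init * 10):03d}c{int(p_ctrl * 10):03d}"
    return f"{case} {res} {conv}"

print(run_code("wz1", rx=2, ry=2, p_init=99.0, p_ctrl=98.0))
# wz1 x2y2d1s1 i990c980
print(run_code("wz2", rd=2, rs=2, p_init=99.8, p_ctrl=99.0))
# wz2 x1y1d2s2 i998c990
```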
3.2 Results
Storm case WZ1 was considered first, for which all combinations of reduced geographical and spectral resolution and convergence criteria defined in Section 3.1.1 were applied. Based on these results, a set of four promising
multigrid options was selected for further analysis and testing, in which the remaining storm
cases WN1 and WZ2 were also considered. As detailed in Section 3.1, the evaluation of
these tests was conducted in terms of the iteration behaviour, simulation speed and accuracy.
In the sections below, we first present the evaluation criteria applied to each of these
performance aspects, followed by a description of the results.
3.2.1 Evaluation criteria
Iteration behaviour
The iteration behaviour of the SWAN computation is investigated by inspection of the print
file showing the percentage of accepted points per iteration. In addition, the evolution of the
significant wave height Hm0, the spectral period Tm-1,0, the mean direction Dir and the
directional spreading Dspr is obtained from the 2D spectra that were output every iteration
at three test points. Figure 2.1 shows the location of these test points, namely one in the
central part of the tidal inlet, and two in the Wadden Sea interior.
Simulation speed
The simulation speed is estimated by counting the number of iterations needed for the
reference run, the initial guess run and the control run. A simple comparison of the number
of iterations is not a useful measure of required CPU time of the multigrid computation. The
first reason for this is that a run with a reduced resolution is faster than a run using the full
resolution. The gain in speed (in terms of the equivalent number of iterations) can be
approximated by dividing the number of iterations of the initial guess run by the product of
all reduction factors. Here, it is assumed that the CPU time per iteration is proportional to
the size of the computational grid. For example, for the multigrid option x1y1d2s2, this reduction factor is 2×2 = 4. The second reason is that some time is spent in the handling of
the hotfiles. Outputting a hotfile, conversion to the required resolution and reading the
hotfile as the initial condition requires time. The extra time required for this data transfer is
estimated to be equivalent to the time required for one iteration.
Thus, the equivalent number of iterations N_MGC for an initial guess and control run can be estimated as:

N_MGC = N_I / (R_x R_y R_d R_s) + N_C + 1 ,    (3.1)
in which N_I and N_C are the number of iterations of the initial guess and control run, respectively. The gain in speed of the multigrid method can be expressed as:

Gain = (N_R - N_MGC) / N_R × 100% ,    (3.2)

in which N_R is the number of iterations of the reference run using the same convergence criteria as the control run.
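Eqs. (3.1) and (3.2) combine into a simple cost estimate. In the sketch below, the split n_init = 2 and n_ctrl = 22 is an assumed example (the tables report only the totals); the reference count 23 and multigrid total 23.5 correspond to the first row of Tables 3.5 and 3.6:

```python
def equivalent_iterations(n_init, n_ctrl, rx=1, ry=1, rd=1, rs=1):
    """Eq. (3.1): initial-guess iterations discounted by the product of the
    reduction factors, plus control-run iterations, plus one
    iteration-equivalent for the hotfile handling."""
    return n_init / (rx * ry * rd * rs) + n_ctrl + 1.0

def speed_gain(n_ref, n_mgc):
    """Eq. (3.2): relative saving in iterations versus the reference run."""
    return (n_ref - n_mgc) / n_ref * 100.0

# Option x2y2d1s1 with assumed counts n_init = 2, n_ctrl = 22:
n_mgc = equivalent_iterations(2, 22, rx=2, ry=2)
print(n_mgc)                            # 23.5
print(round(speed_gain(23, n_mgc), 2))  # -2.17, as in Table 3.6, first row
```

A negative gain thus means that the multigrid run is equivalent to more iterations than the reference run, i.e. a net loss of speed.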
Accuracy
The accuracy of the multigrid method is investigated by a qualitative and a quantitative
comparison of the spatial distributions of the convergence errors in the four integral wave
parameters obtained with the multigrid control run and the reference run. Firstly, spatial
plots are made of the convergence errors of the control and reference runs, which are
respectively defined at every geographical grid point m as
ε_m = |P_C,m - P_B,m| / P_B,m × 100%   and   ε_m = |P_R,m - P_B,m| / P_B,m × 100%    (3.3)

for the significant wave height Hm0 and the spectral period Tm-1,0, and

ε_m = |P_C,m - P_B,m|   and   ε_m = |P_R,m - P_B,m|    (3.4)

for the mean direction Dir and the directional spreading Dspr. Here P_C,m is any of the four integral parameters produced by the control run at a geographical location m, P_R,m is the corresponding parameter produced by the reference run, P_B,m is the corresponding parameter of the benchmark run, and ε_m is the convergence error at that location, expressed as a percentage or an absolute difference. Selected plots of this kind will be presented in Section 3.2.2.
A quantitative measure of the average convergence error is computed as the mean convergence error of all parameter values in a rectangular box positioned around the tidal inlet (see Figure 2.1). For the control run this is computed as

μ_C = (1/M) Σ_{m=1}^{M} |P_C,m - P_B,m| / P_B,m    (3.5)

and similarly for the reference run as

μ_R = (1/M) Σ_{m=1}^{M} |P_R,m - P_B,m| / P_B,m ,    (3.6)

where M is the number of grid points in the box.
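As a numerical illustration of Eq. (3.5), with hypothetical Hm0 values at three box points (all values below are invented for the example; the relative form applies to Hm0 and Tm-1,0, the absolute form of Eq. (3.4) to Dir and Dspr):

```python
import numpy as np

def mean_convergence_error(p_run, p_bench, relative=True):
    """Eqs. (3.5)-(3.6): average convergence error over the M points in the
    analysis box; relative for Hm0 and Tm-1,0, absolute for Dir and Dspr."""
    err = np.abs(p_run - p_bench)
    if relative:
        err = err / p_bench
    return err.mean()

# Hypothetical Hm0 values [m] at three box points:
p_ctrl = np.array([2.02, 1.51, 0.99])   # control-run values
p_bench = np.array([2.00, 1.50, 1.00])  # benchmark (converged) values
mu_c = mean_convergence_error(p_ctrl, p_bench)
print(round(mu_c, 4))  # 0.0089, i.e. a mean relative error of about 0.9%
```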
3.2.2 Results for storm case WZ1
Table 3.5 presents the test results of the complete set of simulations for condition WZ1,
which includes all selected multigrid options and convergence settings. The first three
columns of Table 3.5 contain the code names for the storm case, the multigrid option and the
convergence requirements, respectively. The fourth and fifth columns contain the number of
iterations of the reference run and the equivalent number of iterations of the initial guess and control run according to Eq. (3.1). The following eight columns contain, per integral wave parameter, the average convergence error μ for the reference run and for the control run, according to Eqs. (3.6) and (3.5) respectively.
Table 3.6 contains quantitative information of the gain in speed and accuracy of the
multigrid options tested on storm case WZ1. The numbers in this table are based on the
results presented in Table 3.5. The first three columns contain the code names for the storm
case, the multigrid option and the convergence settings, respectively. The fourth column
gives the gain in computational speed in terms of the saving in the number of iterations
according to Eq. (3.2). The next four columns give the relative gain in accuracy for the significant wave height Hm0, spectral period Tm-1,0, mean wave direction Dir and directional spreading Dspr, defined as

E(P) = (μ_R(P) - μ_C(P)) / μ_R(P) × 100% .    (3.7)
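As a check on Eq. (3.7), inserting the rounded Hm0 errors from the first row of Table 3.5 (μ_R = 0.0069, μ_C = 0.0036):

```python
def accuracy_gain(mu_ref, mu_ctrl):
    """Eq. (3.7): relative reduction of the mean convergence error of the
    control run with respect to the reference run."""
    return (mu_ref - mu_ctrl) / mu_ref * 100.0

gain = accuracy_gain(0.0069, 0.0036)
print(round(gain, 1))  # 47.8; Table 3.6 lists 48.34%, presumably computed
                       # from the unrounded mean errors
```

A positive E(P) thus indicates that the control run is more accurate than the reference run for parameter P.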
Case  Resolution  Perc      Nref  Nmg    mu(Hm0) [m]      mu(Tm-10) [s]    mu(Dir) [deg.]   mu(Spr) [deg.]
                                         Ref.    Contr.   Ref.    Contr.   Ref.    Contr.   Ref.    Contr.
wz1   x2y2d1s1    i990c990   23   23.5   0.0069  0.0036   0.0380  0.0189   4.0244  1.4344   1.0233  0.5535
wz1   x2y2d1s1    i990c980   20   20.5   0.0090  0.0045   0.0460  0.0240   4.7278  1.8631   1.2247  0.6839
wz1   x2y2d1s1    i998c990   23   29.8   0.0027  0.0032   0.0163  0.0160   1.9607  1.1884   0.5480  0.4740
wz1   x2y2d1s1    i998c980   20   26.8   0.0090  0.0038   0.0460  0.0181   4.7278  1.1719   1.2247  0.4958
wz1   x3y3d1s1    i990c990   23   23.6   0.0069  0.0042   0.0380  0.0205   4.0244  0.9240   1.0233  0.4932
wz1   x3y3d1s1    i990c980   20   20.6   0.0090  0.0051   0.0460  0.0256   4.7278  1.2081   1.2247  0.6251
wz1   x3y3d1s1    i998c990   23   25.7   0.0027  0.0045   0.0163  0.0220   1.9607  1.5660   0.5480  0.6024
wz1   x3y3d1s1    i998c980   20   22.7   0.0090  0.0052   0.0460  0.0251   4.7278  1.6305   1.2247  0.6473
wz1   x1y1d2s1    i990c990   23   28.0   0.0069  0.0088   0.0380  0.0500   4.0244  4.9898   1.0233  0.9770
wz1   x1y1d2s1    i990c980   20   26.0   0.0090  0.0101   0.0460  0.0579   4.7278  5.5124   1.2247  1.0499
wz1   x1y1d2s1    i998c990   23   31.5   0.0027  0.0096   0.0163  0.0547   1.9607  5.3221   0.5480  0.9952
wz1   x1y1d2s1    i998c980   20   30.5   0.0090  0.0103   0.0460  0.0587   4.7278  5.5885   1.2247  1.0280
wz1   x1y1d1s2    i990c990   23   29.0   0.0069  0.0036   0.0380  0.0203   4.0244  1.3127   1.0233  0.5169
wz1   x1y1d1s2    i990c980   20   27.0   0.0090  0.0041   0.0460  0.0251   4.7278  1.5640   1.2247  0.6020
wz1   x1y1d1s2    i998c990   23   40.0   0.0027  0.0031   0.0163  0.0155   1.9607  0.9372   0.5480  0.2939
wz1   x1y1d1s2    i998c980   20   39.0   0.0090  0.0033   0.0460  0.0167   4.7278  0.9158   1.2247  0.2999
wz1   x1y1d2s2    i990c990   23   22.8   0.0069  0.0087   0.0380  0.0541   4.0244  4.8193   1.0233  0.9972
wz1   x1y1d2s2    i990c980   20   20.8   0.0090  0.0101   0.0460  0.0643   4.7278  5.3552   1.2247  1.0735
wz1   x1y1d2s2    i998c990   23   28.8   0.0027  0.0090   0.0163  0.0568   1.9607  4.9492   0.5480  0.9992
wz1   x1y1d2s2    i998c980   20   26.8   0.0090  0.0103   0.0460  0.0666   4.7278  5.4898   1.2247  1.0653
wz1   x1y1d3s1    i990c990   23   27.3   0.0069  0.0076   0.0380  0.0490   4.0244  4.7298   1.0233  0.9232
wz1   x1y1d3s1    i990c980   20   25.3   0.0090  0.0088   0.0460  0.0576   4.7278  5.2245   1.2247  0.9962
wz1   x1y1d3s1    i998c990   23   34.0   0.0027  0.0076   0.0163  0.0493   1.9607  4.7992   0.5480  0.9232
wz1   x1y1d3s1    i998c980   20   32.0   0.0090  0.0088   0.0460  0.0579   4.7278  5.2952   1.2247  0.9905
wz1   x1y1d1s3    i990c990   23   33.7   0.0069  0.0027   0.0380  0.0128   4.0244  0.4715   1.0233  0.2565
wz1   x1y1d1s3    i990c980   20   30.7   0.0090  0.0034   0.0460  0.0150   4.7278  0.6704   1.2247  0.3599
wz1   x1y1d1s3    i998c990   23   40.3   0.0027  0.0037   0.0163  0.0215   1.9607  1.5766   0.5480  0.4722
wz1   x1y1d1s3    i998c980   20   37.3   0.0090  0.0046   0.0460  0.0235   4.7278  1.6624   1.2247  0.4830
wz1   x2y2d2s2    i990c990   23   21.5   0.0069  0.0077   0.0380  0.0474   4.0244  4.4446   1.0233  0.9949
wz1   x2y2d2s2    i990c980   20   19.5   0.0090  0.0089   0.0460  0.0559   4.7278  4.9438   1.2247  1.0656
wz1   x2y2d2s2    i998c990   23   24.4   0.0027  0.0079   0.0163  0.0490   1.9607  4.5430   0.5480  1.0057
wz1   x2y2d2s2    i998c980   20   22.4   0.0090  0.0090   0.0460  0.0574   4.7278  5.0550   1.2247  1.0697
Table 3.5: Summary of the number of iterations and convergence errors for the reference and control run per multigrid option and convergence setting, for storm case WZ1.
Case  Resolution  Perc      Iter     E[Hm0]   E[Tm-10]  E[Dir]   E[Spr]   E[ave]
                            [%]      [%]      [%]       [%]      [%]      [%]
wz1   x2y2d1s1    i990c990   -2.17    48.34    50.18     64.36    45.91    52.20
wz1   x2y2d1s1    i990c980   -2.50    49.78    47.78     60.59    44.16    50.58
wz1   x2y2d1s1    i998c990  -29.35   -18.08     2.14     39.39    13.49     9.23
wz1   x2y2d1s1    i998c980  -33.75    57.98    60.59     75.21    59.52    63.33
wz1   x3y3d1s1    i990c990   -2.42    38.96    46.03     77.04    51.80    53.46
wz1   x3y3d1s1    i990c980   -2.78    43.68    44.26     74.45    48.96    52.84
wz1   x3y3d1s1    i998c990  -11.86   -66.05   -34.91     20.13    -9.93   -22.69
wz1   x3y3d1s1    i998c980  -13.33    42.46    45.30     65.51    47.15    50.11
wz1   x1y1d2s1    i990c990  -21.74   -26.55   -31.47    -23.99     4.52   -19.37
wz1   x1y1d2s1    i990c980  -30.00   -11.86   -25.84    -16.60    14.28   -10.00
wz1   x1y1d2s1    i998c990  -36.96  -254.24  -234.72   -171.44   -81.62  -185.51
wz1   x1y1d2s1    i998c980  -52.50   -14.30   -27.71    -18.20    16.06   -11.04
wz1   x1y1d1s2    i990c990  -26.09    48.63    46.55     67.38    49.49    53.01
wz1   x1y1d1s2    i990c980  -35.00    54.66    45.37     66.92    50.85    54.45
wz1   x1y1d1s2    i998c990  -73.91   -13.65     5.39     52.20    46.36    22.58
wz1   x1y1d1s2    i998c980  -95.00    63.75    63.66     80.63    75.51    70.89
wz1   x1y1d2s2    i990c990    1.09   -26.12   -42.45    -19.75     2.55   -21.44
wz1   x1y1d2s2    i990c980   -3.75   -11.97   -39.76    -13.27    12.35   -13.16
wz1   x1y1d2s2    i998c990  -25.00  -231.37  -247.52   -152.42   -82.36  -178.42
wz1   x1y1d2s2    i998c980  -33.75   -14.19   -44.85    -16.12    13.01   -15.54
wz1   x1y1d3s1    i990c990  -18.84   -10.25   -28.84    -17.53     9.78   -11.71
wz1   x1y1d3s1    i990c980  -26.67     2.22   -25.21    -10.51    18.66    -3.71
wz1   x1y1d3s1    i998c990  -47.83  -179.70  -201.96   -144.77   -68.48  -148.73
wz1   x1y1d3s1    i998c980  -60.00     2.66   -25.97    -12.00    19.13    -4.05
wz1   x1y1d1s3    i990c990  -46.38    61.76    66.34     88.28    74.94    72.83
wz1   x1y1d1s3    i990c980  -53.33    62.75    67.38     85.82    70.61    71.64
wz1   x1y1d1s3    i998c990  -75.36   -38.38   -31.60     19.59    13.83    -9.14
wz1   x1y1d1s3    i998c980  -86.67    49.45    48.89     64.84    60.56    55.93
wz1   x2y2d2s2    i990c990    6.52   -11.54   -24.76    -10.44     2.77   -10.99
wz1   x2y2d2s2    i990c980    2.50     1.77   -21.49     -4.57    13.00    -2.82
wz1   x2y2d2s2    i998c990   -5.98  -191.51  -200.00   -131.70   -83.54  -151.69
wz1   x2y2d2s2    i998c980  -11.88     0.00   -24.92     -6.92    12.66    -4.80