
# Numerical model

Performing numerical simulations of three-dimensional breaking waves requires a large number of mesh grid nodes and long computation times in order to compute the hydrodynamics from the largest to the smallest length and time scales (Lubin et al., 2011). Moreover, robust and accurate methods are needed for precise numerical simulations. Recent progress in computational power has allowed us to run fine three-dimensional simulations, giving us the opportunity to study, for the first time, the fine vortex filaments generated during the early stage of the plunging wave breaking process, and the subsequent air entrainment.

In this section, we discuss the results from the numerical simulations performed to describe the whole breaking event (Lubin et al., 2006; Lubin and Glockner, 2015; Lubin et al., 2019). The governing equations and numerical methods are therefore only briefly summarized here; further details can be found in Lubin and Glockner (2015).

We solve the Navier-Stokes equations in air and water, using the Large Eddy Simulation (LES) framework. The resulting set of equations describing the entire hydrodynamic and geometrical processes involved in the motion of non-miscible multiphase media is given by Equations 1-3 below:

$$\nabla \cdot \mathbf{u} = 0 \qquad (1)$$

$$\frac{\partial \mathbf{u}}{\partial t} + \left(\mathbf{u} \cdot \nabla\right)\mathbf{u} = -\frac{1}{\rho}\nabla p + \mathbf{g} + \frac{1}{\rho}\nabla \cdot \left[\left(\mu + \mu_t\right)\left(\nabla \mathbf{u} + \nabla^{t}\mathbf{u}\right)\right] \qquad (2)$$

and

$$\frac{\partial C}{\partial t} + \mathbf{u} \cdot \nabla C = 0 \qquad (3)$$

where **u** is the velocity, C the phase function used to locate the different fluids, t the time, p the pressure, **g** the gravity vector, ρ the density and μ the dynamic viscosity. The turbulent viscosity μ_t is calculated with the Mixed Scale model (Sagaut, 1998). x, z and y are respectively the horizontal, vertical and transverse coordinates; u_x, u_z and u_y are the corresponding components of the velocity vector **u**. The real air and water physical properties were used (see Table 1). The main limitation is that surface tension is not considered in this study, as discussed by Lubin and Glockner (2015) and in the discussion section of this chapter.
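The role of the phase function in the one-fluid formulation can be illustrated with a short sketch: the local density and viscosity in each cell are obtained by weighting the air and water values of Table 1 with C. This is a minimal illustration of the usual linear weighting in one-fluid VOF formulations, not the production code; the function name is ours.

```python
# Minimal sketch of the one-fluid property evaluation: the local
# density and viscosity are C-weighted averages of the water and
# air values of Table 1. Illustrative only.

RHO_W, MU_W = 1000.0, 1.0e-3     # water density, water viscosity
RHO_A, MU_A = 1.1768, 1.85e-5    # air density, air viscosity

def mixed_properties(c: float) -> tuple[float, float]:
    """Return (rho, mu) for a cell with water fraction c in [0, 1]."""
    rho = c * RHO_W + (1.0 - c) * RHO_A
    mu = c * MU_W + (1.0 - c) * MU_A
    return rho, mu

# Pure water, pure air, and a half-filled interface cell:
print(mixed_properties(1.0))   # (1000.0, 0.001)
print(mixed_properties(0.0))   # (1.1768, 1.85e-05)
print(mixed_properties(0.5))
```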

Interface tracking is achieved by a Volume Of Fluid (VOF) method combined with a Piecewise Linear Interface Calculation (PLIC) (Youngs et al., 1982; Scardovelli and Zaleski, 1998). This method has the advantage of building a sharp interface between air and water. The numerical tool has already been shown to give accurate results for environmental and industrial applications (Lubin et al., 2006; Lubin et al., 2010; Glockner et al., 2011; Brouilliot and Lubin, 2013). All the details of the numerical methods used to discretize and solve the equations are given in Lubin and Glockner (2015).
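The sharpness of the VOF/PLIC approach can be illustrated in one dimension, where the geometric reconstruction reduces to locating a single interface point inside a cell and fluxing the exact volume that crosses each face. The sketch below is not the solver's code: it assumes a uniform rightward velocity, water lying to the left of a single interface, unit cell size and CFL < 1, and the function name is ours.

```python
# 1D sketch of geometric (PLIC-style) VOF advection. Assumptions
# (illustrative only): uniform velocity u > 0, one interface with
# water on its left, dx = 1, CFL = u*dt < 1. Water in a cell with
# fraction c occupies [0, c] from the left face, so the volume
# crossing the right face in one step is the overlap of the water
# with the strip [1 - CFL, 1]. The interface stays sharp.

def step(c, cfl):
    """Advance volume fractions c by one step; returns a new list."""
    n = len(c)
    flux = [0.0] * (n + 1)      # flux[i] crosses the left face of cell i
    flux[0] = cfl               # inflow boundary: water upstream
    for i in range(n):
        flux[i + 1] = max(0.0, c[i] - (1.0 - cfl))
    return [c[i] + flux[i] - flux[i + 1] for i in range(n)]

c = [1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0]   # interface at x = 3.5
for _ in range(4):
    c = step(c, 0.25)
print(c)   # [1.0, 1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0]
```

After four steps the interface has moved by exactly 4 × 0.25 = 1, and at most one cell holds a fractional value: the geometric flux introduces no numerical diffusion, which is the property the PLIC reconstruction preserves in three dimensions.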

The main evolution from Lubin et al. (2006) to Lubin and Glockner (2015) and Lubin et al. (2019) was the use of a parallel version of the numerical tool, which allowed us to obtain much more detailed results, as shown in the next section.

The numerical tool runs in production on parallel distributed-memory architectures. The computational domain is partitioned at the beginning of the code execution in order to reduce the memory footprint per process. Load balancing is achieved to within one mesh cell in each spatial direction. The equations modeling the physical problem are discretized implicitly, leading to the resolution of linear systems. To solve them, we use the open-source solver and preconditioner libraries made available to the international scientific community by specialists in the field, namely HYPRE. The Navier-Stokes equations are discretized by the finite volume method on a Cartesian mesh. A pressure-correction method (Goda) is used to handle incompressibility and the velocity-pressure coupling. The prediction step is solved with BiCGStab preconditioned by the Jacobi method; the pressure-increment correction step is solved with GMRES combined with the PFMG multigrid preconditioner of the HYPRE library. This combination has proved to be the most efficient for simulations of multiphase problems.
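The structure of the pressure-correction (projection) step can be sketched in a toy setting: given a predicted velocity field u*, solve a Poisson problem for a pressure increment φ and correct u = u* − Δt ∇φ so that the corrected field is divergence-free. This is only a minimal illustration of the idea, not the scheme as implemented: it uses a small 2D periodic grid, a forward-difference divergence paired with a backward-difference gradient (so that div(grad) is the standard 5-point Laplacian), and plain Jacobi sweeps in place of the preconditioned GMRES/BiCGStab solvers mentioned above.

```python
# Toy sketch of the pressure-correction (projection) idea:
# solve Lap(phi) = div(u*) / dt, then u = u* - dt * grad(phi).
# 2D periodic grid; Jacobi stands in for the HYPRE solvers.
import math

N, H, DT = 8, 1.0, 1.0

def div(u, v):
    """Forward-difference divergence on the periodic grid."""
    return [[(u[(i+1) % N][j] - u[i][j] + v[i][(j+1) % N] - v[i][j]) / H
             for j in range(N)] for i in range(N)]

def project(u, v, iters=2000):
    rhs = [[d / DT for d in row] for row in div(u, v)]
    phi = [[0.0] * N for _ in range(N)]
    for _ in range(iters):                      # Jacobi sweeps
        phi = [[(phi[(i+1) % N][j] + phi[(i-1) % N][j]
                 + phi[i][(j+1) % N] + phi[i][(j-1) % N]
                 - H * H * rhs[i][j]) / 4.0
                for j in range(N)] for i in range(N)]
    # correction: subtract dt times the backward-difference gradient
    u2 = [[u[i][j] - DT * (phi[i][j] - phi[(i-1) % N][j]) / H
           for j in range(N)] for i in range(N)]
    v2 = [[v[i][j] - DT * (phi[i][j] - phi[i][(j-1) % N]) / H
           for j in range(N)] for i in range(N)]
    return u2, v2

u = [[math.sin(2 * math.pi * i / N) for j in range(N)] for i in range(N)]
v = [[0.0] * N for _ in range(N)]
u2, v2 = project(u, v)
before = max(abs(d) for row in div(u, v) for d in row)
after = max(abs(d) for row in div(u2, v2) for d in row)
print(before, after)   # the corrected field is (near) divergence-free
```

Pairing the forward-difference divergence with the backward-difference gradient makes the discrete Poisson operator exactly the divergence of the correction, so the projection removes the divergence to the accuracy of the linear solve; this discrete compatibility is the same design constraint the production scheme must satisfy.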

Table 1: Physical parameters used for the numerical simulations.

| Parameter | Value |
| --- | --- |
| Water density, ρ_w | 1000 kg·m⁻³ |
| Water viscosity, μ_w | 1 × 10⁻³ kg·m⁻¹·s⁻¹ |
| Air density, ρ_a | 1.1768 kg·m⁻³ |
| Air viscosity, μ_a | 1.85 × 10⁻⁵ kg·m⁻¹·s⁻¹ |
| Gravity, g | 9.81 m·s⁻² |
| Surface tension | Neglected |

Table 2: Mesh grid densities used in the previous works and the corresponding wall-clock times, from 625 000 to about 819 million mesh grid points.

| Case | Mesh grid density | Mesh grid resolution (m) | Processors | Wall-clock time |
| --- | --- | --- | --- | --- |
| (Lubin et al., 2006) | 250 × 100 × 25 | Δx ≈ Δz = Δy = 4 × 10⁻⁴ | 1 | Several weeks |
| (Lubin and Glockner, 2015) | 256 × 125 × 50 | Δx ≈ Δz = Δy = 4 × 10⁻⁴ | 8 | 1 hour |
| (Lubin and Glockner, 2015) | 512 × 250 × 100 | Δx ≈ Δz = Δy = 2 × 10⁻⁴ | 256 | Few hours |
| (Lubin and Glockner, 2013) | 1024 × 400 × 200 | Δx ≈ Δz = Δy = 10⁻⁴ | 576 | 2 days |
| (Lubin and Glockner, 2015) | 1024 × 500 × 200 | Δx ≈ Δz = Δy = 10⁻⁴ | 1024 | 1 day |
| (Lubin and Glockner, 2015) | 2048 × 1000 × 400 | Δx ≈ Δz = Δy = 0.5 × 10⁻⁴ | 8192 | 3 days |

The performance of Thetis has been verified on different supercomputers. The scalability of the different steps of the code, with 50 × 50 × 50 cells per core, has been tested from 1 node to 128 nodes. The total number of points was 256 million mesh cells on 2048 cores, corresponding to the largest simulations we could perform. The scalability proved to be in accordance with the theoretical scalability for the part of the calculation time spent in the numerical code itself (filling of the matrices, resolution of the advection equation). The time spent in the solvers conforms to what is expected for this kind of exercise, with a contained increase of the computation time.
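The figures quoted for this weak-scaling test are mutually consistent, as a quick check shows; the function name below is ours, introduced only for the arithmetic.

```python
# Back-of-the-envelope check of the weak-scaling setup quoted
# above: 50 x 50 x 50 cells per core, largest run on 2048 cores.

CELLS_PER_CORE = 50 ** 3          # 125 000 cells per core

def total_cells(cores: int) -> int:
    """Total mesh cells in a weak-scaling run on `cores` cores."""
    return cores * CELLS_PER_CORE

print(total_cells(2048))          # 256 000 000 cells, as quoted
```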

More than 100 million grid points have been used to discretize the three-dimensional numerical domain (1024 × 500 × 200), with non-uniform mesh grid cells. The grid was evenly distributed in the longitudinal and transverse directions (Δx = Δy = 10⁻⁴ m). In the vertical direction, the grid was clustered with a constant grid size Δz = 10⁻⁴ m in the free-surface zone. The three-dimensional numerical domain was partitioned into 1024 subdomains (one processor per subdomain). The computing time was approximately 24 hours, with 1024 cores, for a simulated physical time of 0.88 s, which comprised the initial stage of wave breaking and the subsequent air entrainment. As presented in Table 2, the parallel version of the numerical code gives access to very fine length scales, provided high performance computing facilities are available.
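The partitioning constraint mentioned earlier, a load balance accurate to one mesh cell per direction, amounts to splitting each grid direction into chunks whose sizes differ by at most one. A minimal sketch of such a one-dimensional split (our own helper, not code from the solver) is:

```python
# Sketch of the even domain partitioning described above: a grid
# direction is split over several subdomains so that the chunk
# sizes differ by at most one cell. Illustrative helper only.

def split_1d(cells: int, parts: int) -> list[int]:
    """Split `cells` into `parts` chunks differing by at most 1 cell."""
    base, extra = divmod(cells, parts)
    return [base + 1 if p < extra else base for p in range(parts)]

# e.g. splitting the 500-cell vertical direction over 8 ranks:
print(split_1d(500, 8))    # [63, 63, 63, 63, 62, 62, 62, 62]
```

Applying such a split in each direction yields subdomains of nearly identical cell counts, which is what keeps the per-process work balanced on the 1024 cores used here.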
