Comptes Rendus Mécanique

We offer a synthetic exposition of the state of the art in Computational Fluid Dynamics (CFD) relevant to the Navier–Stokes equations with the Boussinesq approximation, smoothly blending a history of the field with an accessible exposition of the underlying mathematical rationale, supporting theorems and a survey of the landmark results on the subject. Attention is paid to those efforts which have opened up new paradigms in modern CFD, the techniques that have emerged as leading candidates, as well as their diverse reverberations in advancing our understanding of thermal buoyancy convection (especially Rayleigh–Bénard convection and the so-called Hadley flow) in both laminar and turbulent conditions.


Introduction
Buoyancy-driven convection is the dominant genus of behavior experienced by any system undergoing fluid motion as a result of heating in a steady and unchanging gravitational field. The basic physical principles at the root of this kind of convection rest on a difference in density (produced by differential heating) across the geometry in which the considered fluid is contained. It is well known that when a fluid is heated it receives an intake of internal energy and therefore undergoes thermal expansion and a decrease in density. On Earth, while hot and less dense fluids tend to rise, denser (relatively colder) fluids tend to sink; accordingly, a circulatory pattern is produced, which involves the entire bulk of the considered fluid. As a result, heat is transferred at a rate significantly faster than it would be through thermal diffusion alone.
These phenomena are omnipresent on the surface of our planet and affect an uncountable number of natural and technological processes. This is the main reason for which the degree of fidelity of the mathematical and numerical models needed to represent them adequately has been an area of keen interest for a long time in the main fields of engineering and physical science (and it shows no obvious sign of running out of interest yet).
Most remarkably, in a still ongoing endeavor to make accessible problems which would otherwise present intractable computational challenges, a specific branch of Computational Fluid Dynamics (CFD) has been developed, which relies on a fundamental idea initially introduced by Oberbeck [1], rationalised by Boussinesq [2] and put on more precise theoretical grounds by various authors over the years (see, e.g., Chandrasekhar [3], Mihaljan [4], Gray and Giorgini [5], Mahrt [6] and Zeytounian [7] just to cite a few).
When investigating the mathematics behind buoyancy (gravitational) convection induced by thermal effects, it has been understood that, as opposed to building a specific strategy to account for the variation of density in each of the terms where it appears, vast simplifications can be made to the governing equations that describe the underlying physics by considering density constant everywhere with the only exception of the "production term" in the momentum equation. A sound consensus has been reached in the scientific community that, when used in combination with specific classes of numerical methods elaborated "ad hoc" for incompressible flows (which we will describe later in the present review), a proper utilization of the Boussinesq assumption can rigorously reduce the required computational effort whilst maintaining reasonable adherence to reality and mathematical integrity. This synergy has led over the years to the establishment of a common, "robust" and to some extent "elegant" theoretical framework, which has played the role of a common point of origin from which many studies in the community have departed. In this context, the present review explores the knowledge accumulated during the last 50-60 years (since the introduction in the 1960s of the famous pressure-based and streamfunction-vorticity methods for isothermal incompressible flow). At the moment, these results are scattered across various papers and, for interested readers wishing to approach this specific field and/or to understand the related historical background, collecting the most relevant information would be a daunting task.
In this regard, the present work might be seen as an extension of an earlier review entitled "On the Nature of Fluid-Dynamics" by Lappa [8]. As illustrated above, however, the motivations underlying the present discussion are quite different. More specifically, here the focus is shifted from the main physical (and to some extent "philosophical") principles on which the discipline of fluid dynamics is based to more pragmatic, implementation-oriented aspects (that is, the intermingled processes by which the complexity of the equations of fluid dynamics can be mitigated and the associated simplifications can lead to more efficient or convenient solution strategies, see Section 2). In other words, the present work examines the art of balancing well-posedness and computational cost in the general area of thermally driven buoyancy convection. As such, rather than a systematic and comprehensive review of the existing literature on these problems (for which readers are referred to existing books on the subject, e.g., Lappa [9,10]), what follows provides a thematic overview of potentially key aspects in the development of effective numerical approaches (Section 3). A brief illustration of the most important (landmark) results obtained accordingly is also provided (Section 4). These findings can be said to have deeply influenced current knowledge in this field and the associated "practices".

The Boussinesq approximation and the balance equations
The purpose of this section is to show how the incompressible formulation of the governing partial differential equations for non-isothermal Newtonian flow (which is based upon constant-density advection, divergence-free flow, and the Boussinesq gravitational body-force approximation) differs from the general conservation form (which only assumes that the fluid is a continuum, see, e.g., Lappa [8]). The starting point of such a process is obviously represented by the classical (complete) version of such balance equations, which for mass, momentum and (internal) energy read

∂ρ/∂t + ∇·(ρV) = 0 (1)

∂(ρV)/∂t + ∇·(ρVV) = ∇·τ + F_b (2)

∂(ρu_int)/∂t + ∇·(ρu_int V) = ∇·(λ∇T) − p∇·V + Φ (3)

where V is the fluid velocity, u_int is its internal energy and ρ, µ and λ represent physical properties (density, dynamic viscosity and thermal conductivity, respectively); t is time and ∇ is the classical nabla operator. The following relationships also hold:

τ = −pI + µ[∇V + (∇V)^T] − (2/3)µ(∇·V)I (4)

F_b = ρ g i_g (5)

Φ = [µ(∇V + (∇V)^T) − (2/3)µ(∇·V)I] : ∇V (6)

where I is the unit matrix, τ is the well-known stress tensor (p being the fluid pressure), F_b is the buoyancy force (formally playing the role of production term in the momentum balance equation), Φ is the viscous dissipation function and i_g is the unit vector along the direction of gravity. Numerical solution of these equations in the most general situation (fully compressible case) generally requires a specific class of methods, which come under the heading of "density-based solvers" (see, e.g., Gauthier [11], Martineau et al. [12]). In this regard, the aforementioned Boussinesq approximation can be seen as a vital mathematical tool able to "project" these equations into a much more manageable framework (which, as developed later in Section 3, effectively underpins the ability of many solvers to treat them in a much more efficient way). The hierarchy of steps leading from (1)-(6) to the Boussinesq simplified (incompressible) variant can be summarized as follows.
First of all, with the exception of the density, all the other physical properties of the fluid are considered constant (this being one of the "pillars" on which the simplified approach relies). By virtue of this hypothesis, the dynamic viscosity and thermal conductivity can be taken out of the terms where they are formally subjected to differentiation. Accordingly:

∇·(µ∇V) = µ∇²V and ∇·(λ∇T) = λ∇²T

Secondly, the last term at the right-hand side of the internal energy equation (formally accounting for the production of internal energy due to viscous dissipation) is generally neglected. Although, strictly speaking, this simplification is not part of the Boussinesq approximation, it has enjoyed widespread use in the existing literature on the numerical simulation of thermal flows (from a practical standpoint, this contribution becomes significant only when some specific problems are considered, Gebhart [13]).
The "core" of the Boussinesq approximation can be seen in the last step needed to transform these equations. It operates by linearizing the buoyancy term by assuming a direct proportionality law between density and temperature. This law can be expressed synthetically as where β T is simply known as the thermal expansion coefficient and T 0 is a reference value for temperature (temperature corresponding to the condition for which ρ = ρ 0 ). This relationship can be interpreted or derived in two ways: first, intuitively (the engineer's point of view), assuming for simplicity that in a sufficiently small neighbourhood of a certain reference condition, the density decreases with temperature according to a direct and linear proportionality trend, and then more rigorously (the mathematician's point of view), considering the related multivariate Taylor expansion in series and neglecting all the terms of order higher than one. On the basis of these two alternate perspectives, the significance of β T can readily be seen as: either a physical parameter accounting for a relative change in the volume of the fluid for a unit increase of the temperature, or the first derivative of density with respect to temperature. By simply replacing the buoyancy force at right-hand side of (2) with a linearized form of it, assuming a constant density elsewhere and taking into account (7) and (8), the following practical formulation is finally obtained: and where ν = µ/ρ 0 . The last equation is amenable to another simplification considering that the internal energy can directly be expressed as a function of the temperature T , i.e. du int = C v dT where C v is the specific heat at constant volume. 
Taking into account that for many fluids C_v ≅ C_p, where C_p is the specific heat at constant pressure, and introducing the thermal diffusivity α defined as α = λ/ρ₀C_p, the energy equation can finally be cast in compact form as:

∂T/∂t + ∇·(VT) = α∇²T (13)

Moreover, assuming reference quantities for time, velocity and pressure based on such a thermodynamic quantity, i.e. L²/α, α/L and ρ₀(α/L)², where L is a reference length, and scaling the temperature with the prevailing temperature difference ∆T, the overall set of governing equations can be put in a shape particularly convenient for the identification of the main non-dimensional parameters (characteristic numbers) on which these problems depend:

∇·V = 0 (14)

∂V/∂t + ∇·(VV) = −∇p + Pr∇²V + Pr Ra T i_g (15)

∂T/∂t + ∇·(VT) = ∇²T (16)

where

Pr = ν/α (17)

Ra = g β_T ∆T L³/(να) (18)

are the Prandtl and Rayleigh numbers, respectively (the so-called "equivalent" Grashof number being Gr = Ra/Pr). It is worth emphasizing that, as this definition of Ra depends directly on the thermal expansion coefficient, it only makes sense in the framework of the Boussinesq approximation (alternate approaches exist where its definition is slightly different, see, e.g., Lappa [14]). Equations (14)-(16) and the related set of characteristic numbers have enjoyed widespread use in the literature. Using them as a basis, it has been understood that flow intensity and the hierarchy of bifurcations displayed by thermally induced gravitational flows depend on both Pr and Ra. The relative orientation of gravity and the prevailing temperature gradient also plays an important role. If gravity and the temperature gradient are parallel, flow is produced only for a sufficiently high temperature difference (larger than a given "critical" value), whereas if they are perpendicular no threshold must be exceeded (fluid motion from an initially quiescent state is produced regardless of the value taken by the Rayleigh number).
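To make the role of the characteristic numbers concrete, the short sketch below evaluates Pr, Ra and the equivalent Grashof number from dimensional inputs. The property values (water-like) and the layer geometry are assumptions of this example only; the 1708 threshold quoted in the test refers to the classical critical Rayleigh number for a layer bounded by two rigid horizontal walls, a standard result not derived in the text.

```python
def prandtl(nu, alpha):
    """Pr = nu/alpha: ratio of momentum to thermal diffusivity."""
    return nu / alpha

def rayleigh(g, beta_T, dT, L, nu, alpha):
    """Ra = g*beta_T*dT*L^3/(nu*alpha), meaningful within the Boussinesq framework."""
    return g * beta_T * dT * L**3 / (nu * alpha)

# Illustrative, water-like property values (assumed for this example only)
nu, alpha = 1.0e-6, 1.4e-7          # kinematic viscosity, thermal diffusivity [m^2/s]
Pr = prandtl(nu, alpha)              # roughly 7 for water at room temperature
Ra = rayleigh(9.81, 2.1e-4, 10.0, 0.01, nu, alpha)  # 10 K across a 1 cm layer
Gr = Ra / Pr                         # the "equivalent" Grashof number
```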
Notable differences also concern the specific route of evolution taken by buoyancy convection in these two archetypal situations as Ra is increased (through secondary, tertiary and higher-order states until fully developed chaos appears). As a result, a true dichotomy has emerged in the literature with two almost independent lines of inquiry running in parallel, one dedicated to the behavior of fluid systems which are uniformly heated from below and cooled from above (leading to the so-called Rayleigh-Bénard (RB) convection, Refs [15,16]) and another dealing with the situations where heating and cooling are applied along vertical sidewalls (the Hadley flow, Ref. [17]).

Numerical methods
A further consideration, stemming from (14)-(18), is that these equations are not usually amenable to analytical solution unless very simple cases are considered (Lappa [9]). This is one of the reasons why numerical strategies have gained considerable attention over the last 50 years. Although a mathematical basis for the study of the aforementioned instabilities and of the evolution towards chaos is often represented by the theories of stability and bifurcation (which generally rely on alternate approaches based on linearized versions of the balance equations, the evaluation of Jacobian matrices and the related eigenvalues), these methods require the knowledge of "basic" fields as a necessary prerequisite. Unfortunately, in most cases (in practice, in all the circumstances where analytical expressions are not available), the only way to determine these "basic states" is through numerical solution of the governing equations in their complete (non-linear) form. All these factors have contributed to cement the view that there is merit in resorting to considerable labor based on numerical integration. This modus operandi offers the possibility to perform parametric analyses of the individual effect of each influential factor, providing a step-by-step improvement of the understanding of experimental results for cases where general analytic solutions are not available or situations where other methodologies have been unproductive or have reached apparent limits of usefulness. In particular, through CFD, non-linear behaviors can easily be accessed and studied. Last but not least, CFD allows the investigation of physical fluid phenomena that are difficult or impossible to address experimentally, e.g. situations where hazards are present (explosions, radiation, and pollution) or problems where the nature of the fluid itself (opaque high-temperature melts, reactive materials) makes direct experimental analysis rather difficult.
Interestingly, however, the existing literature shows that the equations have not always been addressed in the "pristine" Boussinesq form represented by (14)-(18). Quite often, they have been subjected to mathematical manipulations aimed at turning them into equivalent formulations thought (for some reason) to be more convenient from a numerical point of view. Many of these innovative or alternate CFD techniques have been elaborated by creatively bringing together some of the aspects which are specific to incompressible flows and existing theorems or paradigms which are typical of tensor and vector analysis, as further discussed in the remainder of this section.
In particular, we will group the existing methods elaborated over the last 50 years into three main categories (and some related sub-variants) according to the common underpinning principles which distinguish one class from another.

Primitive-variables methods (Pressure-based solvers)
We start from a description of this class of methods (pressure-based solvers) not only because from a purely historical standpoint they represent a successful attempt to devise a set of numerical techniques able to take advantage of the postulated incompressible nature of the considered flow, but also because they have earned a reputation over the years as a category of numerical algorithms for which the long-standing well-posedness issue (Gresho [18]) has been explored to a significant extent (we will come back to this important concept later).
These methods have enjoyed widespread use under several names or "headings": fractional-step technique, projection method or simply pressure-correction approach (in the present review simply referred to as the primitive-variables strategy, given the related choice of independent unknowns). These algorithms were pioneered by Harlow and Welch [19], Harlow et al. [20] and Welch et al. [21]. Other authors, Chorin [22] and Temam [23,24], independently developed a similar strategy, and their own implementations must also be quoted.
Although the specifics of the techniques used by different authors vary, the basic idea is essentially the same. These methods stem from the specific mathematical properties of (14)-(18) (linked to the incompressible nature of the considered problem) and the coupling that is established accordingly among the primitive variables V , p and T . These are intimately related to one another through the momentum equation (with regard to pressure and velocity), and via the energy equation and the presence of the buoyancy (Boussinesq) term in the momentum equation (for what concerns velocity and temperature).
In addition to these levels of coupling, however, a full understanding of the rationale on which this category of numerical techniques is based also requires taking into account other "constraints". As an example, the first of these requirements involves an adequate numerical representation of the vorticity (ζ = ∇∧V) associated with the considered flow, while a second level of complexity in such a modeling hierarchy is represented by the need to make the resulting velocity field "solenoidal". This second requirement (the incompressibility constraint) brings problems of its own, including the issue of allowing the velocity field to satisfy two different equations at the same time.
In practice, a solid mathematical basis for the development of these methods, taking into account all such restrictions, can be found in some well-known theorems. These include the so-called Hodge or Helmholtz decomposition theorem, which guarantees that any vector field can always be split into the gradient of a scalar function and a solenoidal part (Helmholtz [51]). The theorem of the inverse calculus can also be invoked, which ensures that in a simply connected domain a vector field is uniquely determined (fixed) when its divergence and curl (its "vorticity" if the considered vector is the fluid velocity) are assigned together with its normal component at the boundary (Ladyzhenskaya [52]). These theoretical circumstances can directly be cast in a useful form to yield a practical time-marching procedure.
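As a hedged numerical illustration of the decomposition theorem, the sketch below splits a doubly periodic 2D vector field into a gradient part and a divergence-free part by solving a Poisson equation in Fourier space. The periodic setting and the spectral machinery are conveniences of this example only (chosen because they make the split exact to machine precision); the theorem itself, and the boundary treatment discussed in this section, are more general.

```python
import numpy as np

def helmholtz_split(u, v):
    """Split a doubly periodic 2D field (u, v) into gradient + solenoidal parts.

    Solves lap(phi) = div(u, v) spectrally and returns ((gx, gy), (sx, sy)),
    where (gx, gy) = grad(phi) and (sx, sy) = (u, v) - grad(phi) is div-free.
    The domain is assumed to be [0, 2*pi)^2 with uniform spacing (an
    assumption of this sketch, not a requirement of the theorem).
    """
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=2.0 * np.pi / n)  # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    div_hat = 1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)
    # lap(phi) = div  =>  phi_hat = -div_hat / k^2 (mean mode set to zero)
    phi_hat = np.where(k2 == 0, 0.0, -div_hat / np.where(k2 == 0, 1.0, k2))
    gx = np.fft.ifft2(1j * kx * phi_hat).real
    gy = np.fft.ifft2(1j * ky * phi_hat).real
    return (gx, gy), (u - gx, v - gy)
```

By construction the two parts sum back to the original field, the solenoidal part has (numerically) zero divergence, and the gradient part has zero curl.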
In this regard, it is convenient to start from the first of the properties of the velocity field, i.e. its vorticity. Most notably, this is retained if an initial approximation to the momentum equation, where the pressure term is neglected (namely (19)), is initially advanced to determine an intermediate or provisional velocity field denoted by V*:

(V* − V^(n))/∆t = [−∇·(VV) + Pr∇²V + Pr Ra T i_g]^(n) (19)

In order to illustrate the principles on which this approach is based, in this equation, for simplicity, the time derivative of velocity has been replaced with the corresponding discretized version obtained by using a classical backward formula, where ∆t is the time integration step. As the gradient of pressure has been artificially removed, obviously, the velocity field determined through this equation has no direct physical meaning. Nevertheless, the vorticity associated with V* still makes sense because vorticity does not depend on the gradient of pressure; indeed, by definition:

∇ ∧ (∇p) = 0 (20)

Reintroducing the previously neglected gradient of pressure, however, one may express the physical velocity field simply as:

V^(n+1) = V* − C∇p (21)

where C is a constant and p still plays the role of an unknown. In practice, with this class of methods, Equation (21) is forced into (14) in order to obtain an effective equation for the determination of the otherwise unknown p:

∇²p = ∇·V*/C (22)

This extra relationship formally closes the system of equations from a numerical point of view if C is set equal to ∆t. Notably, unlike compressible flow, where pressure is generally determined using the gas state equation (and therefore depends on density and temperature computed by solving the continuity and energy equations, respectively), for incompressible flow p is only a function of the fluid velocity.
To summarize, the sequence of numerical steps required for the determination of an incompressible velocity field can be sketched as follows: (1) discretization of (19) for the calculation of V * , (2) solution of the elliptic Poisson equation (22) for the determination of p, (3) utilization of (21) for the calculation of the final velocity field and eventually (16) for the temperature (generally determined in a segregated way).
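As a concrete (heavily simplified) sketch of this three-step sequence, the code below advances Equations (19), (22) and (21) by one time step on a doubly periodic 2D grid using Fourier (spectral) derivatives. Periodicity, the explicit Euler scheme, the domain size and the assignment of the buoyancy term Pr·Ra·T to the vertical velocity component are all assumptions of this illustration (they conveniently sidestep the pressure boundary-condition issue discussed below in this section), not features of any particular published solver.

```python
import numpy as np

def projection_step(u, v, T, dt, Pr, Ra):
    """One pressure-projection step (Eqs. (19), (22), (21)) on a periodic grid.

    Sketch only: spectral derivatives on [0, 2*pi)^2, explicit Euler in time,
    buoyancy acting on the v component (assumptions of this illustration).
    Returns the divergence-free velocity (u, v) and the pressure p.
    """
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=2.0 * np.pi / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2

    def ddx(f): return np.fft.ifft2(1j * kx * np.fft.fft2(f)).real
    def ddy(f): return np.fft.ifft2(1j * ky * np.fft.fft2(f)).real
    def lap(f): return np.fft.ifft2(-k2 * np.fft.fft2(f)).real

    # (1) provisional velocity V* from (19): pressure gradient omitted
    us = u + dt * (-u * ddx(u) - v * ddy(u) + Pr * lap(u))
    vs = v + dt * (-u * ddx(v) - v * ddy(v) + Pr * lap(v) + Pr * Ra * T)
    # (2) Poisson equation (22): lap(p) = div(V*)/dt  (i.e. C = dt)
    rhs_hat = np.fft.fft2(ddx(us) + ddy(vs)) / dt
    p_hat = np.where(k2 == 0, 0.0, -rhs_hat / np.where(k2 == 0, 1.0, k2))
    p = np.fft.ifft2(p_hat).real
    # (3) correction (21): V = V* - dt*grad(p), divergence-free by construction
    return us - dt * ddx(p), vs - dt * ddy(p), p
```

Whatever the quality of the provisional field V*, the corrected field returned by step (3) is solenoidal to machine precision, which is precisely the property the projection is designed to enforce.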
As readers will easily realize at this stage, Equation (21) can be seen as an application of the aforementioned decomposition theorem, where V^(n+1) and p play the role of the solenoidal part and of the scalar function underpinning V*, respectively. Moreover, as ∇·V^(n+1) = 0 and ∇∧V^(n+1) = ∇∧V*, these numerical methods can also be seen as a spinoff of the Ladyzhenskaya theorem (as the divergence and curl of the velocity field determined using (19), (22) and (21) are the same that would be obtained through solution of the original set of equations).
These equations must obviously be complemented with the corresponding physical boundary conditions (PBC), i.e. the mathematical constraints which specify the known physical behavior of one or more of the unknowns at the boundaries (e.g., impermeability and no-slip conditions for the velocity on the solid walls and constant temperature or assigned heat flux for the energy equation). However, such "information" is not sufficient to close the problem, as "something" must also be specified about the behavior of pressure on such boundaries. These extra settings are generally called "numerical" boundary conditions (NBC) because they are needed for the numerical method but not explicitly provided by the physics of the problem (this being the case of (22)). The problem of a proper "choice" of such additional conditions (i.e. such that the resulting problem can be considered "well-posed") has been debated in the scientific community for a relatively long time. Despite some minor differences, consensus has emerged that they have to be specified in terms of the normal component of ∇p at the boundary, determined via some "compatibility relations" (Gresho and Sani [53], Gresho [18], Karniadakis et al. [54], and Petersson [48]).
The enormous versatility afforded by this computational architecture and the advances in terms of conditions that make it mathematically consistent go some way to explain the substantial growth and development over the years of these methods. They have earned a reputation, especially for their ability to capture time-dependent flows and a variety of related instabilities (Lappa [9]).
As an additional proof of their robustness and flexibility, this class of techniques has even been extended (relatively recently) to weakly compressible flows (circumstances where the Mach number is relatively small, but the temperature difference is so high that the Boussinesq approximation is no longer reliable). This endeavor has led to a new family of methods known as "variable-density" techniques (to be distinguished from the aforementioned density-based solvers owing to their reliance on a pressure-based equation very similar to that presented in this section). From a purely historical standpoint, this new set of computational variants has its roots in the initial numerical experiments by Paolucci [55] and Majda and Sethian [56]. These authors showed that by filtering out fast acoustic waves (known to be the major source of issues in density-based solvers due to the associated need to keep the time integration step very small to maintain numerical stability), time integration steps comparable to those of pressure-based solvers could be used. The later asymptotic analysis conducted by Roller and Munz [57] provided a relevant theoretical context for further elaboration of the approach proposed by these authors. For the first time, it was illustrated analytically that "the pressure" in a fluid can be decomposed into three parts with different physical meanings, accounting separately for thermodynamic effects, acoustic wave propagation, and the balance of forces (pressure dynamics effect). Along these lines, Müller [58] clearly showed that if the primitive variables are replaced with equivalent series expansions in terms of the Mach number and a low-Mach-number flow is considered, the thermodynamic pressure can be considered homogeneous in space (its variation in time being governed by the gas state equation).
Moreover, the dynamic pressure can be decoupled from density and temperature fluctuations, leading to the remarkable possibility of determining it using an equation formally similar to (22) (i.e. obtained by combining the continuity and momentum equations, see also Beccantini et al. [59], Benteboula and Lauriat [60] [70]; some interesting related benchmarks for thermal buoyancy flow are also available [71,72]). Most recently, Lappa [14] has shown that such an approach can even be applied to circumstances where the temperature is so high that the microscopic vibrational degrees of freedom of the gas molecules are excited and other approximations typical of ideal gases (such as the principle of molecular energy equipartition) cease to be valid. As correctly observed by Munz et al. [67], in general, these methods are more robust than density-based solvers (nevertheless, their range of validity is more limited as they cannot deal with intermediate or high values of the Mach number).

Vorticity methods
While the theoretical underpinnings for the process of abstraction leading to the introduction of the class of methods treated in Section 3.1 emerged naturally out of the mathematics behind some theorems, another important category of techniques for incompressible flows with the Boussinesq approximation originated from the realization that time-marching procedures could also be introduced by "turning around" the direct relationship existing between velocity (or some related "potentials") and the vorticity of the fluid (mathematically represented by ζ = ∇∧V). Notably, a specific balance equation for the latter can be directly introduced by simply taking the curl of (15). Recalling that ∇∧(∇p) = 0, that both V and ζ = ∇∧V are divergence-free, and using some well-known vector identities:

∇ ∧ (V·∇V) = V·∇ζ − ζ·∇V (23)

∇ ∧ (∇²V) = ∇²ζ (24)

the governing equation for vorticity in an incompressible flow with the Boussinesq approximation can be recast as

∂ζ/∂t + V·∇ζ = ζ·∇V + Pr∇²ζ + Pr Ra ∇∧(T i_g) (25)

Although this equation apparently solves a thorny problem, that is, the lack of a time-evolution equation for the pressure (which obliges the primitive-variable methods to derive a dedicated extra equation through mathematical manipulations), Equation (25) is not enough to close the problem from a mathematical point of view, as it contains both the unknowns ζ and V (which leads to the need for additional mathematical developments and rationale, see Sections 3.2.1-3.2.3).

The streamfunction-vorticity formulation
Notably, at the same time when Harlow and Welch [19] were developing a seminal version of the primitive-variable approach at the Los Alamos Scientific Laboratory (as illustrated in Section 3.1), another researcher employed in the same laboratory (Fromm [73]) provided another possible cue for obtaining a mathematical closure of the problem. He started to work on the idea of introducing an additional equation by using a well-known property of two-dimensional (2D) incompressible fluid flow, i.e. the existence of the so-called streamfunction. Following this simple intuition, the vorticity balance equation was simplified as:

∂ζ/∂t + V·∇ζ = Pr∇²ζ + Pr Ra ∇∧(T i_g) (26)

(ζ·∇V being equal to zero, given the postulated 2D nature of the considered flow); moreover, using the well-known relationship between the streamfunction and the flow velocity components u and v (along x and y, respectively), namely

u = ∂ψ/∂y and v = −∂ψ/∂x (27)

an extra equation, in place of the continuity equation (implicitly satisfied by (27)), was introduced as

∇²ψ = −ζ (28)

Thereby, a four-equation formulation was obtained (Equations (26), (28), (27) and (16)) that is able to produce automatically divergence-free velocity fields. Notably, this approach (generally simply known as "ψ-ζ") has enjoyed significant utilization for the simulation of buoyancy-driven convection, especially for those situations where the development of the flow along the third spatial direction has been postulated to be negligible (an assumption often used in the early stages of development of CFD, i.e. the 1960s, 70s and 80s, when the limited speed and memory of available computers prevented investigators from considering realistic three-dimensional configurations).
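The mechanics of (27) can be illustrated with a short sketch (assuming, for convenience only, a doubly periodic domain and spectral derivatives rather than Fromm's original finite-difference setting): any velocity field derived from a streamfunction is automatically divergence-free, and its scalar vorticity ζ = ∂v/∂x − ∂u/∂y satisfies the Poisson relation ∇²ψ = −ζ.

```python
import numpy as np

def velocity_from_psi(psi):
    """Eq. (27): u = d(psi)/dy, v = -d(psi)/dx on a periodic [0, 2*pi)^2 grid.

    Spectral derivatives are an assumption of this sketch only.
    """
    n = psi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=2.0 * np.pi / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    psi_hat = np.fft.fft2(psi)
    u = np.fft.ifft2(1j * ky * psi_hat).real   # u = d(psi)/dy
    v = np.fft.ifft2(-1j * kx * psi_hat).real  # v = -d(psi)/dx
    return u, v
```

Because u and v are both derivatives of the same scalar, continuity (∂u/∂x + ∂v/∂y = 0) holds identically, which is exactly why (27) lets the continuity equation be dropped from the system.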

Potential vector-vorticity
Continuing in a similar vein to Fromm [73], other investigators pursued further the idea of working with the vorticity equation in place of the momentum equation and making the continuity equation implicitly satisfied through a "potential vector". The main focus was on the attempt to overcome the constraint of two-dimensionality and allow the flow to develop a component along the (previously neglected) third direction. Examples of such efforts are the works by Aziz and Hellums [74], and Mallinson and de Vahl Davis [75]. They introduced a three-dimensional (3D) variant of the ψ-ζ method using a potential vector Ψ satisfying the relationships

V = ∇ ∧ Ψ (29)

∇·Ψ = 0 (30)

which make it "unique" according to the aforementioned theorem of the inverse calculus (provided the component of Ψ perpendicular to the boundary is also specified). Moreover, they exploited an extra equation obtained by taking the curl of the velocity field, i.e.

∇²Ψ = −ζ (31)
This equation corresponds to three scalar elliptic equations in the three-dimensional case and formally closes the problem from a mathematical standpoint together with (29), (26) and (16), the associated formal sequence of numerical stages being: (1) solution of (26) for the determination of ζ, (2) solution of the elliptic Poisson equation (31) for the determination of the three components of the potential vector, (3) utilization of (29) for the calculation of the final velocity field and eventually (16) for the temperature (generally determined in a segregated way). As readers will immediately realize by examining this sequence, although (like other methods based on the vorticity) this approach "filters out" the gradient of pressure, it does not remove the need to specify "extra" boundary conditions (NBCs) having poor physical significance (the conditions on the domain boundary for Ψ).
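The stages above can be illustrated in a hedged way by working backwards on a triply periodic grid: given a divergence-free velocity field, the sketch below computes ζ = ∇∧V, solves the three Poisson equations of (31) spectrally for Ψ, and verifies that ∇∧Ψ recovers the original field. Periodicity is an assumption of this illustration; it conveniently removes the boundary-condition difficulties for Ψ that the text describes.

```python
import numpy as np

def curl3(fx, fy, fz, kx, ky, kz):
    """Spectral curl of a triply periodic vector field on [0, 2*pi)^3."""
    Fx, Fy, Fz = np.fft.fftn(fx), np.fft.fftn(fy), np.fft.fftn(fz)
    cx = np.fft.ifftn(1j * ky * Fz - 1j * kz * Fy).real
    cy = np.fft.ifftn(1j * kz * Fx - 1j * kx * Fz).real
    cz = np.fft.ifftn(1j * kx * Fy - 1j * ky * Fx).real
    return cx, cy, cz

def potential_from_vorticity(zx, zy, zz, kx, ky, kz):
    """Solve lap(Psi) = -zeta (Eq. (31)) component-wise in Fourier space."""
    k2 = kx**2 + ky**2 + kz**2
    k2s = np.where(k2 == 0, 1.0, k2)
    out = []
    for z in (zx, zy, zz):
        psi_hat = np.fft.fftn(z) / k2s  # -k^2 * Psi_hat = -zeta_hat
        psi_hat[0, 0, 0] = 0.0          # fix the arbitrary mean
        out.append(np.fft.ifftn(psi_hat).real)
    return tuple(out)
```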
Although Hirasaki and Hellums [76] could show, to a certain extent, that the constraint of wall impermeability implies that Ψ is normal to the boundary (i.e. that the components tangential to the boundary must vanish) and that the normal derivative of its normal component must be zero, the early promise of this approach as a viable alternative to primitive-variable (pressure-based) or other methods for the analysis of 3D flows has been tempered by a lack of consensus and clarity in identifying the boundary conditions which effectively make the problem well-posed from a mathematical point of view (Hirasaki and Hellums [77], Richardson and Cornish [78]).

Velocity-vorticity formulation
In order to overcome the problems due to the introduction of an extra unknown vector (the potential vector itself) while retaining the possibility to address 3D flows, other vorticity-based formulations have been elaborated by replacing the quantity Ψ appearing in (31) directly with the fluid velocity, i.e. by considering the following mathematical equality in place of (31):

∇²V = −∇ ∧ ζ (32)

Comparing this alternate approach with the earlier vorticity-vector-potential formulation, it becomes clear that only the boundary-condition aspect has to be reconsidered, because the dynamic aspect is governed by the vorticity transport equation in both formulations. In this regard, it is worth noting that, while (32) is generally integrated assuming the known behavior of the velocity on the boundary, the vorticity equation requires the use of an "extrapolation" or the vorticity definition itself to determine its missing values on the boundary (see, e.g., Farouk and Fusegi [79], Speziale [80], Stella and Guj [81], Dacles and Hafez [82], Napolitano and Pascazio [83], Guj and Stella [84], Pascazio and Napolitano [85], Ruas [86], Lo et al. [87], Maekawa [88], Kosaka and Maekawa [89]). Although, on the one hand, for the aforementioned reasons, several authors have applied these methods to a reasonable extent, on the other hand, their diffusion has been hindered by the increased computational cost with respect to equivalent formulations based on primitive variables (which require the solution of one elliptic equation for the pressure only, in place of the three equivalent equations required for the determination of the vorticity components). Superimposed on this is the fact that some concerns still exist about the proper knowledge of the boundary conditions that can make the problem well-posed from a computational/theoretical point of view and accurate from a numerical standpoint.
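The equality (32) can be checked numerically. In the 2D periodic sketch below (an illustration under assumptions analogous to the previous ones, with ζ treated as the scalar out-of-plane vorticity, so that ∇∧ζ = (∂ζ/∂y, −∂ζ/∂x)), the Laplacian of a divergence-free field coincides with −∇∧ζ to machine precision.

```python
import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(n, d=2.0 * np.pi / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
fft2, ifft2 = np.fft.fft2, np.fft.ifft2

# A divergence-free velocity field (derivable from a streamfunction)
u, v = -np.sin(Y), np.sin(X)
# Scalar vorticity: zeta = dv/dx - du/dy
zeta = ifft2(1j * kx * fft2(v) - 1j * ky * fft2(u)).real
# Left-hand side of (32): Laplacian of each velocity component
lap_u = ifft2(-k2 * fft2(u)).real
lap_v = ifft2(-k2 * fft2(v)).real
# Right-hand side of (32): -curl(zeta) = (-d(zeta)/dy, d(zeta)/dx)
curl_zeta_x = ifft2(1j * ky * fft2(zeta)).real
curl_zeta_y = ifft2(-1j * kx * fft2(zeta)).real
residual = max(np.max(np.abs(lap_u + curl_zeta_x)),
               np.max(np.abs(lap_v + curl_zeta_y)))
```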
There are no physical boundary conditions (PBCs) on the vorticity; moreover, more grid points are needed in a momentum boundary layer if a vorticity method is used in comparison to a primitive-variables method to get the same accuracy. Finally, as observed by Gresho [18], the implementation of an effective solid-surface boundary condition is often a weak point.

The poloidal-toroidal decomposition
There is another independent formulation, which has led to valuable results in the realm of thermal (buoyancy) convection. This alternate strategy has been largely used by Busse and coworkers in their studies aimed at the characterization of Rayleigh-Bénard convection in several geometrical configurations. These authors envisioned another possible approach, i.e. the so-called poloidal-toroidal decomposition (which is still rooted in the aforementioned Helmholtz decomposition theorem, Ref. [3]).
Expanding on this approach, it is worth starting from the simple argument that, on the basis of the assumption of a solenoidal (divergence-free) velocity field, a vectorial representation in terms of two scalar potentials is possible (with i_g the unit vector along the direction of gravity). Notably, for rectangular geometries, this decomposition has been proven to be unique and complete (Schmitt and von Wahl [90]). A related benefit is that the number of unknowns is reduced by one: while in primitive variables an unknown vector field has three unknown scalar components, only two scalar unknowns are left when using this decomposition, namely the so-called "pol" (ϕ) and "tor" (φ) fields.

As an example, assuming a 3D Cartesian reference system with the y-axis aligned with the sense and direction of gravity, this specific strategy expresses the velocity components through two differential operators (denoted ξ and η) acting on ϕ and φ. The most useful property of these operators is that they are orthogonal; this orthogonality also yields other useful relations. Following this approach, and taking the y component of the (curl)^2 and of the curl of the momentum equation, two separate equations are finally obtained for ϕ and φ, with the boundary conditions ϕ = ∂ϕ/∂y = φ = 0 along horizontal solid walls (as illustrated further in Section 4, these equations have generally been integrated assuming periodic boundary conditions along the vertical boundaries).

As a concluding remark for this section, we wish to highlight that the preceding illustration of all these alternate numerical strategies should not be misread as implying that these paradigms have followed a linear historical trajectory or have been specific to particular forms of buoyancy convection. Different approaches have always been simultaneously involved in the investigation of these subjects, as discussed in more detail in the next section.
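A minimal sketch of the decomposition itself (in the standard form reported, e.g., by Schmitt and von Wahl [90]; the article's ξ and η operators and the resulting equations for ϕ and φ are not reproduced here):

```latex
% Poloidal-toroidal representation of a solenoidal velocity field
\mathbf{V}
 = \underbrace{\nabla \times \nabla \times (\phi\,\mathbf{i}_{g})}_{\text{poloidal ("pol") part}}
 + \underbrace{\nabla \times (\varphi\,\mathbf{i}_{g})}_{\text{toroidal ("tor") part}},
\qquad
\nabla \cdot \mathbf{V} = 0 \ \text{identically}
```

Because each term is written as a curl, the continuity equation is satisfied by construction, which is precisely why the pressure and one velocity unknown can be eliminated.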

A survey of landmark results
CFD for incompressible flow has evolved over the last 50 years to be strongly integrated in studies of thermally driven buoyancy (gravitational) convection. These developments (especially the improvements in the speed and memory capacity of available computational resources) have rendered CFD complementary to experiments. Remarkably, in many circumstances, CFD has even become the "preferred mode" of investigation due to extremely important advantages such as the possibility to set the boundary conditions in a precise manner (i.e. the interaction of the considered system with the external environment, Ferialdi et al. [91] and references therein). As a result of this trend, the fruits of the Boussinesq approximation are nowadays deeply embedded in the existing literature, and consensus exists that the peculiar synergy between this paradigm and the numerical techniques illustrated in Section 3 has contributed significantly to shaping our knowledge of this type of flow in its different variants and manifestations.
In order to provide evidence for these achievements and the related historical trajectory, in this final section, a few landmark numerical results are presented. The cited references (merely a selective sample of relevant works in the field of thermally-driven buoyancy convection) are restricted to the case of Newtonian fluids (for viscoelastic fluids, readers may consider the companion review by Lappa [92]). In particular, we focus on those studies on which many subsequent works have relied and/or very recent contributions. Moreover, although the cases with inclined temperature gradient have also attracted appreciable attention, due to page limits, we consider two fundamental situations only, namely, the two canonical configurations in which the imposed gradient is vertical or horizontal. Obviously, since a complete historical perspective can be found in specific books on the subject (e.g., Refs [9,10]), these results are presented from a different perspective, that is, emphasis is given to the numerical techniques by which they have been obtained.

Simulations conducted under the constraint of two-dimensionality
In this regard, it is worth starting from the relatively simple observation that, in the early stages of development, this field took a peculiar path of evolution not because two-dimensional techniques (e.g., the ψ-ζ method) were leading to more interesting or valuable results, but simply because investigators were prevented from fully exploiting 3D approaches by the (too demanding) computational resources these required. This argument can indeed be used as an obvious justification for the abundance of ψ-ζ based results in existing benchmarks such as those by De Vahl Davis [93], De Vahl Davis and Jones [94] and Roux [95] and many other authors.
Although limited by the constraint of two-dimensionality, these efforts have led to significant advancements. As an example, it has been discovered that three-dimensionality is not required to elucidate a very interesting property of dissipative (viscous) flows driven by gravity, i.e. their ability to produce multiple solutions. This specific aptitude was initially verified for 2D finite-size geometries (rectangular cavities of given aspect ratio A = length/depth).
As an example, using a primitive-variables method, Goldhirsch et al. [96] reported on complicated flow patterns and textural transitions in both non-chaotic and chaotic Rayleigh-Bénard (RB) convection together with multistability, i.e. the tendency of numerical simulations to converge towards different stable solutions for a fixed set of parameters (Pr, Ra and A) depending on the considered initial conditions. These findings proved that these systems can take many and complicated routes to attain equilibrium or chaotic states. Mizushima and Adachi [97] placed this problem in a more precise theoretical context. By means of linear stability analysis and numerical (non-linear) solutions based on the ψ-ζ method, these authors could determine precisely the related "bifurcation diagram", i.e. the set of parallel branches of solutions coexisting in phase space (and their dependence on the control parameter).
Still relying on the ψ-ζ method and on a primitive-variables technique applied to the 2D case, Pulicani et al. [98] and Gelfgat et al. [99], respectively, illustrated that multiple solutions are not an exclusive prerogative of RB convection. Indeed, geometries as simple as rectangular enclosures can still produce alternate convective states "coexisting" in the space of parameters if they are differentially heated along the horizontal direction (Hadley flow) and specific values of the Prandtl number are considered (e.g., liquid metals for which Pr = O(10^-2)).

Three-dimensional simulations
Extensions to the 3D case have shown that the situation can be made even more complex by the variety of possible secondary and higher-order modes of convection that are enabled when the third spatial direction is taken into account. For Rayleigh-Bénard convection and different values of the Prandtl number, many of these numerical investigations have been carried out in the framework of the toroidal-poloidal decomposition paradigm (generally, in combination with lateral periodic boundary conditions used to mimic infinite layers, Clever and Busse [100][101][102][103][104]). Some studies also exist where these higher-order states of convection were analyzed in finite-size geometries (laterally delimited by solid vertical walls); given both the page limits and the now enormous amount of published literature on these topics, in the remainder of this review we limit ourselves to considering enclosures of parallelepipedic (or cubic) shape. Relevant examples along these lines are due to Nakano et al. [105] and Tomita and Abe [106] for a liquid metal and air, respectively (primitive-variables approach), and Bucchignani and Stella [107] and Stella and Bucchignani [108] for water (velocity-vorticity method). More recently, Yigit et al. [109] have considered Pr = 1 and Pr = 320 (again in the framework of a primitive-variables technique). Regardless of the numerical strategy used, these efforts have highlighted that the delicate evolutionary route to a final state can be coupled to significant and intriguing adjustments of the roll pattern inside the considered fluid domain.
Relevant 3D computations have also been conducted for circumstances where the imposed temperature difference is horizontal. While in the early stages of development of this field, 2D simulations were chosen primarily for numerical convenience, most of these subsequent studies for the Hadley cell were aimed at discerning 3D effects through comparison with computations initially conducted under the constraint of two-dimensionality. The works cited in the following should be seen as various examples of such a practice (or modeling/analysis hierarchy). By means of the primitive-variable approach, Fusegi et al. [110,111], Janssen and Henkes [112], Labrosse et al. [113], Trias et al. [114] and Gelfgat [115,116] considered 3D cavities filled with air; Henry and Buffat [117], Wakitani [118], Hof et al. [119] focused on the case of liquid metals. Although findings based on the velocity-vorticity method are relatively rare and sparse, some works based on this approach have also been produced (interested readers may consider, e.g., Hiller et al. [120] for a high-Pr fluid, Kowalewski [121] for the case of water and Bennett and Hsueh [122] for air).

Extension to the turbulent regime
Over the last 20 years, significant attention has been paid to extending all these studies based on the Boussinesq approximation to circumstances for which high-dimensional chaos or fully turbulent conditions are attained. Most notably, this endeavor has opened a new line of inquiry, which has not yet been fully explored, has instigated a significant amount of research and has led to interesting independent strategies of attack. Trying to filter out differences and concentrate on common aspects, these different variants (which also fit the aims and scope of the present review) can be grouped into three main categories.
The first possible approach is known as DNS, where this acronym stands for "direct numerical simulation". This may be regarded as a straightforward extension of the techniques described in Section 3 to regimes where the flow becomes turbulent; the only additional requirement or precaution is an adequate choice of space and time discretization steps (which must be sufficiently "small" to capture all the spatio-temporal properties of the turbulent flow over a wide range of scales, Kerr [123], Farhangnia et al. [124]). In particular, two fundamental concepts that have significantly supported researchers' quest to define proper requirements in terms of numerical "resolution" are the so-called Kolmogorov length scale and the thickness of the thermal boundary layers.
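As a minimal numerical illustration of such a resolution requirement (a sketch based on the standard dimensional definition of the Kolmogorov scale, η = (ν³/ε)^(1/4); the function names and the fluid-property values below are ours and purely illustrative, not taken from the cited works):

```python
import math

# Standard dimensional estimate of the Kolmogorov (dissipative) length scale:
# eta = (nu^3 / eps)^(1/4), with nu the kinematic viscosity and eps the mean
# kinetic-energy dissipation rate. Illustrative sketch only.

def kolmogorov_scale(nu: float, eps: float) -> float:
    """Kolmogorov length scale in metres."""
    return (nu**3 / eps) ** 0.25

def grid_points_per_side(L: float, nu: float, eps: float) -> int:
    """Rough number of uniform grid points along a side of length L needed
    for the mesh spacing to match the Kolmogorov scale."""
    return math.ceil(L / kolmogorov_scale(nu, eps))

if __name__ == "__main__":
    nu = 1.0e-6   # kinematic viscosity (water-like), m^2/s -- illustrative
    eps = 1.0e-4  # assumed mean dissipation rate, W/kg -- illustrative
    L = 0.1       # cavity size, m
    print(f"eta ~ {kolmogorov_scale(nu, eps):.2e} m; "
          f"points per side ~ {grid_points_per_side(L, nu, eps)}")
```

Even with these arbitrary values the estimate calls for a few hundred points per direction, i.e. tens of millions of cells in 3D, which gives a concrete sense of why DNS becomes prohibitive as η shrinks with increasing Ra.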
The Kolmogorov length scale (η) is at the root of the theories elaborated by Kolmogorov [125][126][127][128], in which turbulence is assumed to take a universal (homogeneous and isotropic) behavior over a certain interval of scales (the so-called inertial range, Kraichnan [129]). In particular, this theoretical framework rests on the idea that in such an interval turbulent kinetic energy cascades at a uniform rate until viscous (frictional) effects turn it into internal energy (this final conversion occurring at the η scale, which explains why η is also known as "the dissipative length scale"). From a theoretical point of view, η bounds the inertial range from below; from a physical standpoint (i.e. taking a "spatial" perspective) it may be regarded as the size of the smallest "eddies" present in the considered flow. This interpretation explains why existing efforts based on DNS have typically used η as a reference value to define the required spatial resolution. On the basis of dimensional analysis, the non-dimensional η can be expressed in terms of a reference length L and the energy dissipation rate ε. Moving beyond the theoretical, it is worth noticing that, over recent years, useful (practical) correlations have been introduced in the literature by which this quantity can be quickly estimated using the characteristic numbers on which thermal convection depends. These relationships differ remarkably according to the specific type of flow considered (depending on the relative orientation of gravity and temperature gradient). For Rayleigh-Bénard convection it is given by correlation (43) (see, e.g., Kerr [123], De et al. [130]), with a corresponding relationship (44) for the other orientation.

Other possible numerical approaches to simulate turbulent flows have been elaborated with the specific intent to mitigate the otherwise prohibitive computational resources required by DNS when the Rayleigh number takes high values (as the reader will easily realize by taking a look at both (43) and (44), η → 0 in the limit as Ra → ∞ regardless of the relative orientation of the temperature difference with respect to gravity). These alternate strategies have been designed with the explicit intent to take advantage of some general or universal properties of turbulence (such as the aforementioned tendency to display an isotropic behavior on small spatial scales). Moreover, they have directly contributed to providing new useful information on the physics of these processes. In this context, it is worth noting that, because of the difficulties in elaborating relevant closure concepts for derived quantities such as vorticity, or for variables with a less immediate physical interpretation (i.e. the potential vector), the overwhelming majority of such turbulence models have been developed in conjunction with primitive-variables-based techniques (which explains to a certain extent why this specific category of methods has emerged as a leading candidate to analyze both laminar and turbulent flows over the last 20 years). In particular, two main strategies have been successfully implemented for turbulent flows and have led to a true "dichotomy" in the literature.
These two alternative paradigms have been obtained by filtering the equations in time or in space.
In the first case, known as RANS, i.e. the Reynolds-averaged Navier-Stokes approach, the velocity is split into a time-averaged and a fluctuating component. By substituting the decomposed velocity (made of these two terms) into the original balance equations and taking the time-averaged value of each term, an equivalent set of equations is formally obtained where all terms depend on the time-averaged velocity only, with just one exception, namely the so-called Reynolds term. From a physical point of view, this term represents extra stresses produced by the instantaneous fluctuations in the velocity field, which contribute to increase the exchange of momentum; from a mathematical point of view, it represents an "issue", i.e. a proper closure model must be introduced by which these additional stresses can be determined as a function of the time-averaged velocity. A classical approach to do so is the so-called k-ε two-equation model, where the Reynolds stress tensor is linked to the gradient of the time-averaged velocity via a "turbulent viscosity", which in turn is expressed as the ratio between the square of the turbulent kinetic energy (k) and the energy dissipation rate (ε).
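In the standard notation of this closure (a generic sketch using the eddy-viscosity hypothesis; the value C_μ ≈ 0.09 is the one commonly quoted in the literature, not taken from the text above):

```latex
% Reynolds decomposition of the instantaneous velocity
u_i = \overline{u}_i + u_i'

% Eddy-viscosity closure for the Reynolds stresses
-\,\overline{u_i' u_j'}
 = \nu_t \left( \frac{\partial \overline{u}_i}{\partial x_j}
              + \frac{\partial \overline{u}_j}{\partial x_i} \right)
 - \frac{2}{3}\, k\, \delta_{ij},
\qquad
k = \tfrac{1}{2}\,\overline{u_i' u_i'}

% Turbulent viscosity from k and the dissipation rate
\nu_t = C_\mu \,\frac{k^2}{\varepsilon}, \qquad C_\mu \approx 0.09
```

Two additional transport equations (for k and ε) then close the time-averaged system.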
Over the years, this specific class of methods has been tuned to take into account some specific aspects of thermal buoyancy, especially those which cause a departure from the assumption of isotropy of the turbulence or require modeling the interactions between the fluctuating velocity and temperature fields (the reader being referred to the useful discussions available in the works by Davidson [132] and Hanjalic and Vasic [133]).
Given its intrinsic nature, this paradigm has proved to be a viable strategy especially in cases for which the turbulent fluid motion is steady "in the mean", this being especially the case of Hadley convection (horizontal temperature gradient) in shallow, square (or cubic) or tall cavities, for which the flow typically develops a single cell and turbulence consists of a set of oscillations with different frequencies superimposed on such a "basic" state (Dol et al. [134], Liu and Wen [135], Dol and Hanjalic [136], Hsieh and Lien [137], Altaç and Ugurlubilek [138]).
An alternate modus operandi exists. Known as LES (Large Eddy Simulation), this independent strategy is based on a "more spatial" perspective. It relies on the idea that the contribution of the large-scale, energy-containing flow should be computed directly, while the small-scale effects, which are supposed to be isotropic and universal (the phenomena occurring in the aforementioned inertial range), can be "modeled". By doing so, the solver is effectively relieved of the burden of accounting for such effects numerically. However, the price introduced by this coarse-graining process is that correlations are needed between resolved and unresolved terms. In other words (just like the RANS), this paradigm calls for a proper "closure model", which in this case should be seen as a way to account for the effect of the small spatial scales on the large ones. A first implementation of this way of thinking is due to Smagorinsky [139], where a simple account of all the unresolved small scales was provided through the definition of a single parameter referred to as the "subgrid-scale viscosity", namely an extra viscosity to be added to the molecular kinematic viscosity in the balance equations.
Smagorinsky [139] introduced a direct relationship between this extra viscosity, the size (space integration step) of the considered mesh and the flow strain rate. Later, such a model was extended to circumstances where thermal effects play a significant role and "corrected" for the case of natural convection, where the flow is obviously produced by buoyant forces and not by a purely dynamic forcing. These concepts were initially elaborated by Lilly [140] through the introduction of a "subgrid thermal diffusivity", playing in the energy equation the same role that the subgrid viscosity plays in the momentum equation. Mason [141] revised both these quantities in order to take into account specific buoyancy-induced effects (mixing due to statistically unstable conditions). More recently, other researchers have contributed to increasing the complexity of these models by incorporating more "physics". Given the page limits, here we limit ourselves to recalling that, in general, LES has proved to be the more relevant choice, especially when phenomena of vortex coalescence and splitting become pervasive throughout the computational domain.
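In its classical form (as commonly written in the literature; the Smagorinsky constant C_s is flow-dependent, and the subgrid Prandtl number Pr_sgs appearing in Lilly's extension is an additional modeling assumption), the subgrid-scale viscosity and the associated thermal diffusivity read:

```latex
% Smagorinsky subgrid-scale viscosity (Delta: filter width / mesh size)
\nu_{sgs} = (C_s\,\Delta)^2 \,\lvert \overline{S} \rvert,
\qquad
\lvert \overline{S} \rvert = \sqrt{2\,\overline{S}_{ij}\overline{S}_{ij}},
\qquad
\overline{S}_{ij} = \frac{1}{2}\left( \frac{\partial \overline{u}_i}{\partial x_j}
                                    + \frac{\partial \overline{u}_j}{\partial x_i} \right)

% Subgrid thermal diffusivity (analogous role in the energy equation)
\kappa_{sgs} = \frac{\nu_{sgs}}{\mathrm{Pr}_{sgs}}
```

Here the overbar denotes the spatially filtered (resolved) field, so both quantities vanish wherever the resolved strain rate does.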

Conflicts of interest
The author has no conflict of interest to declare.