Evaluating a distance function

Computing the distance function to some surface or line is a problem that occurs very frequently. There are several ways of computing a relevant approximation of this function, using for example techniques originating from the approximation of Hamilton-Jacobi problems, or the fast sweeping method. Here we make a link with an elliptic problem and propose a very fast way to approximate the distance function.


Introduction
In many cases, one has to evaluate the distance function to a surface Γ_D which is part of the boundary of an open set Ω ⊂ R^d. An example in fluid mechanics is that of turbulence modelling: in some models, one of the parameters in the evaluation of the turbulent viscosity is the distance to the airfoil. Other examples can be found in medical image processing, surface reconstruction, etc. There are many ways to evaluate the distance function. One technique is to numerically evaluate the viscosity solution of the Eikonal equation,

    ‖∇u‖ = 1 in Ω, u = 0 on Γ_D. (1)

In this formulation, only one boundary condition is prescribed, the Dirichlet one on Γ_D; nothing is said for the other parts of ∂Ω. Numerical techniques for this can be found in [12, 1]. This can be improved by using a fast marching technique in the spirit of [13], or techniques coming from computer science. From time to time, one can see in the literature methods that compute an approximation of the distance function as the solution of some Laplace equation; an example can be found in [11] and the references therein. Other references, where elliptic problems are considered, can be found in [5, 3]. They solve (1) by minimizing (‖∇u‖ − 1)² using a variational formulation with continuous or discontinuous finite elements that need to be penalized to enforce the Dirichlet boundary conditions. It is interesting to notice that their formulation is relatively close to ours, though reached by completely different paths.
There is no obvious link between (1), which is of hyperbolic nature, and an elliptic problem. The purpose of this short note is to provide such a link, via the Hopf-Cole transform, and to provide a very fast algorithm to evaluate the solution of (1) (compared to explicit algorithms for computing its viscosity solution), at least if one accepts a small viscosity term (which will nevertheless be present in any numerical method of PDE origin) and discretisation errors.
The format of this note is the following. First, we recall the Hopf-Cole transform and show how it can be applied to the steady Eikonal equation. This leads to an elliptic problem on a function constructed from the distance. We discuss the boundary conditions for this problem, and in particular for the part of ∂Ω which is not Γ D . Then we provide a numerical method, and show its behaviour on unstructured triangular meshes.

The problem
We want to solve the following problem: let Ω ⊂ R^n be open, and write ∂Ω = Γ_D ∪ Γ_S, with Γ_D ∩ Γ_S of empty interior. We want to compute the distance function to Γ_D. We consider the problem of finding the viscosity solution of

    ‖∇u‖ = 1 in Ω, u = 0 on Γ_D. (2)

Of course Γ_S can be empty, but Γ_D is never empty by assumption. On Γ_S we have set a Soner-type boundary condition, see [4] for example. For the sake of completeness, we recall the notion of viscosity (super)solution for (2): if ϕ ∈ C¹(Ω) and x_0 is a point where u − ϕ reaches a local minimum, then ‖∇ϕ(x_0)‖ − 1 ≥ 0. Here we propose a method where we solve a viscous regularisation of (2), or more precisely of

    ‖∇u‖² − 1 = 0 in Ω, u = 0 on Γ_D, (3)

since the two problems have the same solutions. We consider, for ν > 0, the problem (in the viscosity sense, see [4] for second order problems)

    ‖∇u‖² − 1 = ν∆u in Ω, u = 0 on Γ_D. (4)

Rewriting the problem
If, instead of looking at the steady problem (4), we consider the unsteady one,

    ∂u/∂t + ‖∇u‖² − 1 = ν∆u,

with the same initial and boundary conditions, this is "almost" the viscous Burgers equation ∂u/∂t + ½‖∇u‖² = ν∆u, for which it is very well known that it can be transformed into the heat equation by the Hopf-Cole transform,

    ϕ = exp(−u/(2ν)), i.e. u = −2ν log ϕ. (5)

The proof is classical (though done in one dimension in most textbooks), but we nevertheless repeat it. Our notations are the following: ∇u represents the first derivative of the function u, so that for any h ∈ R^d, Du(x)(h) = ⟨∇u(x), h⟩, and D²u represents the second derivative (the Hessian) of u, so that D²u(x)(h, h) = ⟨D²u(x) h, h⟩. With this in mind, we have, from (5), that

    ∇u = −2ν ∇ϕ/ϕ

and

    ∆u = −2ν ( ∆ϕ/ϕ − ‖∇ϕ‖²/ϕ² ).

Hence, plugging this into the Burgers equation, the gradient terms cancel, so that in the end we see that ϕ needs to satisfy the heat equation

    ∂ϕ/∂t = ν∆ϕ,

with ϕ = 1 on the Dirichlet boundary and ϕ ≥ 0 on Ω. Unfortunately, for the time dependent problem ∂u/∂t + ‖∇u‖² − 1 = ν∆u the transform does not go through as well, but this is not an issue because this is not the problem we want to solve. We want to solve ‖∇u‖² − 1 = ν∆u, for which we again use the change of variable

    u = α log ϕ, (6)

with α to be determined.
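The algebra of the change of variable u = α log ϕ can be checked symbolically. The following SymPy sketch (in two space dimensions) verifies that ‖∇u‖² − 1 − ν∆u, written in terms of ϕ, has the gradient-term coefficient α(α + ν):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
nu = sp.symbols('nu', positive=True)
alpha = sp.symbols('alpha', real=True)
phi = sp.Function('phi', positive=True)(x, y)
u = alpha * sp.log(phi)                      # the change of variable u = alpha*log(phi)

grad_u2 = sp.diff(u, x)**2 + sp.diff(u, y)**2
lap_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)
grad_phi2 = sp.diff(phi, x)**2 + sp.diff(phi, y)**2
lap_phi = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)

# |grad u|^2 - 1 - nu*Laplacian(u), rewritten in terms of phi:
lhs = grad_u2 - 1 - nu * lap_u
expected = alpha * (alpha + nu) * grad_phi2 / phi**2 - nu * alpha * lap_phi / phi - 1
assert sp.simplify(lhs - expected) == 0

# alpha = -nu kills the gradient term and leaves nu^2*Laplacian(phi)/phi - 1 = 0,
# i.e. the elliptic equation nu^2 * Laplacian(phi) = phi.
assert sp.simplify(lhs.subs(alpha, -nu) - (nu**2 * lap_phi / phi - 1)) == 0
```

The second assertion anticipates the choice α = −ν made in the next paragraph: with that choice the first-order term disappears and only a linear elliptic equation for ϕ remains.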
We get

    ‖∇u‖² − 1 − ν∆u = α(α + ν) ‖∇ϕ‖²/ϕ² − να ∆ϕ/ϕ − 1,

so we take α = −ν, and we need to solve

    ν²∆ϕ = ϕ in Ω, (7)

with the boundary condition ϕ(x) = 1 on Γ_D. On Γ_S, inspired by what is done for the inviscid problem, and using (6), we see that a condition of inequality type must be imposed on ϕ. This looks a bit like an obstacle problem, but it is not exactly the same (because the "obstacle" sits at the boundary). In the next section, inspired by what is done for the Eikonal equation, we will propose a discretisation of this kind of condition. In the numerical section, we will also compare this boundary condition with more natural ones, such as a Neumann condition on the distance function, on Γ_S.

Numerical discretisation

Formulation
We consider a triangulation of the polygonal domain Ω that respects Γ_D and Γ_S. It consists of triangles (or tetrahedra) that are generically denoted by K. The vertices of the triangulation are denoted by a_i, i = 1, …, n_s. The number of elements is n_e. For any vertex a_i, V(i) is the set of vertices connected to a_i by an edge of the triangulation. Often, we make the identification between a vertex a_i and its index i. For the sake of simplicity we only consider the two dimensional case; the three dimensional one can be done in a similar way. The approximation space is

    V_h = { v ∈ C⁰(Ω̄), v|_K ∈ P¹(K) for all K }.

We write the problem as: find ϕ_h ∈ V_h, with ϕ_h = 1 on Γ_D, such that for every vertex a_i not lying on ∂Ω,

    ν² ∫_Ω ∇ϕ_h · ∇θ_i dx + ∫_Ω ϕ_h θ_i dx = 0, (9a)

where θ_i is the basis function associated with a_i, coupled to boundary conditions on Γ_S. If Γ_S = ∅, there is nothing more to do. In the case when Γ_S ≠ ∅, we define the boundary conditions according to what is done for the Eikonal equations, see for example [2]. One defines a numerical Hamiltonian H, whose role is to translate the viscosity inequality ‖∇u‖² − 1 − ν∆u ≥ 0 on Γ_S in the limit ν → 0. There are several possible versions, but the best (because the gradient of the numerical solution is controlled, see [2]) is to use a Godunov Hamiltonian, which amounts to writing, for any vertex a_i ∈ Γ_S,

    max_{j∈V(i)} (u_i − u_j)/‖a_i − a_j‖ = 1.

Keeping in mind that the solution of (9) is related to the solution of (4) by d = −ν log ϕ, we will consider the following implementation for the Soner boundary condition: for a_i ∈ Γ_S,

    ϕ_i = max_{j∈V(i)} ϕ_j exp( −‖a_i − a_j‖/ν ). (9b)
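In code, this discrete Soner condition can be sketched as follows. This is our own reading of the boundary closure, not the paper's implementation: the update ϕ_i = max_{j∈V(i)} ϕ_j exp(−‖a_i − a_j‖/ν) is exactly the first-arrival update d_i = min_{j∈V(i)} (d_j + ‖a_i − a_j‖) written in the variable d = −ν log ϕ; the function and argument names are ours.

```python
import numpy as np

def soner_update(phi, pts, gamma_s, neighbors, nu):
    """Apply a discrete Soner-type condition at the vertices listed in gamma_s.

    phi       : nodal values of the transformed unknown
    pts       : vertex coordinates, shape (n_s, dim)
    gamma_s   : indices of the vertices lying on Gamma_S
    neighbors : dict i -> list of indices in V(i)
    nu        : viscosity parameter
    """
    new = phi.copy()
    for i in gamma_s:
        # phi_i = max_j phi_j * exp(-|a_i - a_j| / nu),
        # i.e. d_i = min_j (d_j + |a_i - a_j|) with d = -nu*log(phi)
        new[i] = max(
            phi[j] * np.exp(-np.linalg.norm(pts[i] - pts[j]) / nu)
            for j in neighbors[i]
        )
    return new
```

For instance, on a three node chain with unit spacing, a Γ_S node whose only neighbour carries d = 1 receives d = 2 after one update.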

Numerical procedure
We use the following notations. A triangle K has 3 vertices, denoted by a_i, a_j, a_k. We assume that the elements are positively oriented. The gradient of the basis function θ_i associated to the vertex a_i is

    ∇θ_i = (a_k − a_j)^⊥ / (2|K|),

where, for any vector x, x^⊥ is orthogonal to x and such that the basis (x, x^⊥) is direct. As usual, the angle at a_i in K is denoted by α_i^K. The variational formulation in Ω leads to

    ν² R Φ + M Φ = 0, (10a)

with the boundary condition

    ϕ_i = 1 for any a_i ∈ Γ_D, (10b)

and (9b) on Γ_S when this set is not empty. Here, M is the mass matrix and R the rigidity (stiffness) matrix:

    M_ij = ∫_Ω θ_i θ_j dx,   R_ij = ∫_Ω ∇θ_i · ∇θ_j dx.

If Γ_S = ∅, this can be solved by an iterative or a direct solver. Here we have chosen the direct solver PaStiX [9]. If Γ_S ≠ ∅, the problem becomes nonlinear. In that case we use an Uzawa-type procedure: we construct a sequence of functions by initialising with ϕ^0 = 1 on Ω, and from ϕ^n we construct ϕ^{n+1} by setting:
1. We compute ϕ^{n+1}, solution of (10a) with the Dirichlet boundary conditions ϕ^{n+1} = 1 on Γ_D and ϕ^{n+1} = ϕ^n on Γ_S.
2. Then we set, for a_i ∈ Γ_S,

    ϕ^{n+1}_i = max_{j∈V(i)} ϕ^{n+1}_j exp( −‖a_i − a_j‖/ν ), (12)

i.e. we apply the discrete Soner condition to the solution computed in step 1.

It is well known that for the P¹ approximation on triangular elements, the contribution of K to R is, for p ≠ q in {i, j, k},

    R^K_{pq} = −½ cot α^K_r, with a_r the remaining vertex, and R^K_{pp} = −(R^K_{pq} + R^K_{pr}). (11)

Since cot α + cot β = sin(α+β)/(sin α sin β), and since α^K_i + α^K_j + α^K_k = π, the diagonal terms are positive¹. The term R_ij for two adjacent vertices is, since a_i and a_j define the common edge between two triangles K⁺ and K⁻,

    R_ij = −½ ( cot α^{K⁺} + cot α^{K⁻} ),

where α^{K⁺} and α^{K⁻} are the angles facing the edge [a_i, a_j] in K⁺ and K⁻, and it is known that R_ij ≤ 0 for i ≠ j if and only if

    α^{K⁺} + α^{K⁻} ≤ π, (13)

see figure 1. From (12), we see that 0 ≤ ϕ^{n+1}_i ≤ max_{j∈V(i)} ϕ^{n+1}_j for any a_i ∈ Γ_S.
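The cotangent formula for the element stiffness matrix can be verified numerically. The sketch below (function names are ours) assembles R^K from the gradient formula ∇θ_i = (a_k − a_j)^⊥/(2|K|) and compares the off-diagonal entries with −½ cot of the opposite angle:

```python
import numpy as np

def p1_element_stiffness(ai, aj, ak):
    """Element stiffness R^K of the P1 basis on a positively oriented triangle,
    using grad(theta_i) = (a_k - a_j)^perp / (2|K|) with x^perp = (-x_2, x_1)."""
    perp = lambda v: np.array([-v[1], v[0]])
    e1, e2 = aj - ai, ak - ai
    area = 0.5 * (e1[0] * e2[1] - e1[1] * e2[0])   # > 0 for positive orientation
    g = np.array([perp(ak - aj), perp(ai - ak), perp(aj - ai)]) / (2.0 * area)
    return area * (g @ g.T)       # R^K_pq = |K| grad(theta_p) . grad(theta_q)

def angle_at(p, q, r):
    """Angle of the triangle pqr at vertex p."""
    u, v = q - p, r - p
    return np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

On any positively oriented triangle, the off-diagonal entries equal −½ cot of the angle at the opposite vertex, the rows sum to zero (the three basis gradients sum to the zero vector), and the diagonal entries are positive.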

¹ A quicker way to see this is to write cot α^K_j + cot α^K_k = sin(α^K_j + α^K_k)/(sin α^K_j sin α^K_k) = sin α^K_i /(sin α^K_j sin α^K_k) > 0.
Similarly, we can show that the sequence is monotone, and since ϕ^n ≥ 0, it is convergent. The monotonicity comes from ϕ^1 ≤ 1 = ϕ^0, after which we proceed by induction: using the discrete maximum principle (thanks to the condition (13)), we get ϕ^{n+1} ≤ ϕ^n in Ω, and then, using (12), the same inequality on Γ_S. We have thus shown: if the variational formulation of the Laplace operator satisfies a maximum principle, the sequence (ϕ^n)_{n∈N} converges. This is in particular true if the triangulation satisfies the angle condition (13).
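As a sanity check of the whole procedure, the following one dimensional sketch (our own illustration, not the paper's solver, with an assumed form of the discrete Soner update) runs the fixed-point iteration for ν²ϕ″ = ϕ on [0, 1], with the Dirichlet condition at x = 0 and the Soner-type update at x = 1, and recovers d(x) = x from d = −ν log ϕ:

```python
import numpy as np

# 1D analogue on [0, 1]: Gamma_D = {0}, Gamma_S = {1}, exact distance d(x) = x.
# Boundary closure phi_N = phi_{N-1} * exp(-h/nu) is the 1D Soner-type update.
nu, N = 0.1, 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

# Interior equations: nu^2 (phi_{i-1} - 2 phi_i + phi_{i+1}) / h^2 = phi_i.
c = nu**2 / h**2
n = N - 1
A = (np.diag([-2.0 * c - 1.0] * n)
     + np.diag([c] * (n - 1), 1)
     + np.diag([c] * (n - 1), -1))
Ainv = np.linalg.inv(A)                 # the matrix is fixed; only the rhs changes

phi = np.ones(N + 1)                    # phi^0 = 1 on the whole domain
for _ in range(20000):
    b = np.zeros(n)
    b[0] -= c * 1.0                     # Dirichlet value phi = 1 at x = 0
    b[-1] -= c * phi[-1]                # current boundary value on Gamma_S
    new = np.concatenate(([1.0], Ainv @ b, [phi[-1]]))
    new[-1] = new[-2] * np.exp(-h / nu) # Soner-type update at x = 1
    done = np.max(np.abs(new - phi)) < 1e-12
    phi = new
    if done:
        break

d = -nu * np.log(phi)                   # should be close to d(x) = x
```

The iterates stay between 0 and 1 and decrease monotonically, as in the argument above, and the recovered d matches x up to discretisation error.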

Numerical examples
All the calculations have been done on an iMac with a 3.5 GHz Quad-Core Intel Core i7 processor, with version 6.0.2 of PaStiX [9]. To report the performance of the solver: for a mesh with 295 296 vertices and 587 520 elements, generated by GMSH [7] using the frontal Delaunay option, the symbolic factorisation takes 0.47 s, the evaluation of the non-zero entries of the matrices and of the right-hand side takes 0.11 s, and the solution takes 9.16 s. The averaged maximal bandwidth of the matrix was 171 781, its maximal bandwidth 292 530. The computations have been done sequentially. We only show calculations where Γ_S ≠ ∅, because they are a priori more complicated. The symbolic factorisation is done once and for all (for a given mesh) if the Uzawa-type method is needed. The first test is the evaluation of the distance function on the annulus {x, 1 ≤ ‖x‖ ≤ 2}. The viscosity is set to ν = 0.1. The Dirichlet condition is set on the inner circle, and the Soner condition on the outer circle. The solution is displayed on Figure 2-a, while the error with respect to the true solution is on Figure 2-b.
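The annulus test has a radially symmetric analogue that can be checked in a few lines. The sketch below (our own illustration, not the paper's solver) discretises ν²(ϕ″ + ϕ′/r) = ϕ on 1 < r < 2 with ϕ = 1 at r = 1 and a Soner-type closure at r = 2, and recovers the distance to the inner circle up to an O(ν) error:

```python
import numpy as np

nu, N = 0.05, 300
h = 1.0 / N
r = np.linspace(1.0, 2.0, N + 1)

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0] = 1.0
b[0] = 1.0                              # Dirichlet phi = 1 on the inner circle
for i in range(1, N):
    c, cr = nu**2 / h**2, nu**2 / (2.0 * h * r[i])
    A[i, i - 1] = c - cr                # nu^2 (phi'' + phi'/r) = phi, centred FD
    A[i, i] = -2.0 * c - 1.0
    A[i, i + 1] = c + cr
A[N, N - 1] = -np.exp(-h / nu)          # Soner-type closure on the outer circle:
A[N, N] = 1.0                           # phi_N = phi_{N-1} * exp(-h/nu)

phi = np.linalg.solve(A, b)
d = -nu * np.log(phi)                   # approximate distance to the inner circle
```

The recovered d differs from r − 1 by roughly (ν/2) log r, the curvature contribution of the viscous term; with ν = 0.05 the maximal deviation stays well below 0.05.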
The second case is that of the distance to a body made of two NACA airfoils. The Dirichlet condition is set on the airfoils, and the Soner one on the outer boundary. The results are displayed on figure 3, as well as the mesh. Here again, ν = 0.1. We have also considered a case where Γ_D and Γ_S are not disjoint; we take ν = 0.01. We provide the result on a fine mesh (151 713 points and 301 568 elements) on figure 4. In figure 5, we compare the iterative convergence of the algorithm for several meshes (2 473, 9 657 and 151 713 vertices). We observe that we need 65 iterations for the coarse mesh, 130 for the medium one and 485 for the fine one. The ratio of mesh points w.r.t. the coarse mesh is 1:4:61, and for the iterations the ratios are 1:2:7.5, so we see that the cost evolves like h⁻¹ (the mesh is very regular). For a classical explicit hyperbolic solver, the cost scales like the number of points, i.e. like h⁻². In this case, and in others that we have computed (such as the airfoil case), the conclusion is similar, or even better.
To end this paragraph, let us comment a bit on the boundary condition of Soner type. Since we expect the approximation of the distance to have a gradient of norm approximately equal to unity, one may wonder why a Neumann-type boundary condition on the distance, say ∇u · n = 1, would not fit. Indeed, this was our starting point for imposing boundary conditions on Γ_S.
Thus, to set ∂u/∂n = 1, we are led to Robin-type conditions on ϕ: since u = −ν log ϕ, we have ∂u/∂n = −ν (∂ϕ/∂n)/ϕ, so the condition becomes ∂ϕ/∂n + ϕ/ν = 0 on Γ_S. We have compared our formulation and this one, on the fine mesh of the previous test case, with the exact solution, where, for x = (x, y), θ is the arclength on the inner circle between p and (1, 0). The results are displayed on figure 6, with ν = 0.01. From the figure it is clear that the solution with the Neumann condition is completely off compared with our method, even though ours is not perfect either (because ν is not small enough). However, the solution with the Robin condition could be used as an initial guess; this has not been done in our code.

Conclusion
This paper is dedicated to Roland Glowinski. He has always been very kind to the author. Roland also worked a lot on Hamilton-Jacobi equations, for example in [10, 8, 6]. This, plus the handling of [11] as editor of JCP, has motivated the present paper. A final remark: it is certainly possible to establish rigorous error bounds between the distance function and what is computed here, using approximation results between the viscous regularisation of Hamilton-Jacobi equations and standard L∞ error estimates on the P¹ approximation of elliptic equations. This has not been done here because we were motivated by designing a working algorithm.
This algorithm has its own drawbacks. The first one is that when ν becomes very small, the problem becomes stiffer and stiffer. When the domain is large, the actual value of the solution of the elliptic problem becomes extremely small. It is interesting to note the links with large deviation problems (see the last chapter of [4], where exactly the same PDE is studied, for completely different reasons). However, if one comes back to the initial motivation of this work (finding the distance function for turbulence modelling), our experience is that the computation close to the Dirichlet boundary is very reliable, and that applying the Dirichlet condition on all boundaries is enough to get a good approximation.
When the mesh is too distorted, so that a discrete maximum principle does not apply, the solution ϕ can be slightly above 1 (so that the 'distance' would be negative): in that case, the solution provided by this method can be used as a good initial condition for a 'traditional' Hamilton-Jacobi solver.

Figure 6: (a): isolines of the exact solution (red) and of the solution with the Hopf-Cole transform; (b): isolines of the exact solution (red) and of the solution with Robin boundary conditions (black). The background colour is that of the exact solution; in both cases we have 30 isolines between 0 and u = 3.826 (the maximum value of the distance on this domain). The solution with the Neumann condition reaches values larger than 3.9, which explains why its isolines stop.