Simulation of the isoperimetric inequality using a phase field approximation.

The isoperimetric inequality is the most emblematic problem in shape optimization. It reads as follows:

Among all the shapes of a given area, which one minimizes the perimeter?

Someone a long, long time ago
The answer has long been known to be the circle. However, it took a few millennia for mathematicians to give a rigorous proof. A nice overview of the isoperimetric problem, written by Mark Ashbaugh, can be found here. Some proofs and further insight can be found on the blog of Beniamin Bogosel. I assume here that you are somewhat familiar with this problem.

In this article, I want to provide a simple code written exclusively in FreeFem in order to illustrate the isoperimetric problem and the use of the so-called phase field method. Let us first state the problem clearly: let $D \subset \mathbb{R}^n$ be a "working domain", here the open unit hypercube. We want to find $\Omega \subset\subset D$ (compact inclusion) which solves the problem $$ \min_{Vol(\Omega)=m}{Per(\Omega)} $$ with $m > 0$. Note that the set $\Omega$ can equivalently be represented by its characteristic function $\chi_\Omega$.

The phase field method consists in replacing such domains $\Omega$ with functions $u: D \to \mathbb{R}$ that can take values other than $0$ and $1$. The space of functions to consider still needs to contain the characteristic functions of smooth enough domains. One natural choice is the space of functions of bounded variation, denoted $BV(D)$, consisting of the $L^1$ functions whose weak derivative is a Radon measure. It can roughly be seen as a space of piecewise $H^1$ functions. While quite involved, it enjoys nice compactness properties which allow the use of standard methods of the calculus of variations. An interesting feature of this space is that we can define the total variation of a function $u$ as $$ |Du|(D) = \sup\left\{\int_D u \operatorname{div} \phi : \phi \in C^1_c(D, \mathbb{R}^n), \|\phi\|_\infty \leq 1\right\} $$ and it turns out that if $u=\chi_\Omega$ where $\Omega$ has a smooth enough boundary, we have $$ |D\chi_\Omega|(D) = Per(\Omega). $$
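For a hint of why this identity holds (a standard argument, sketched here for a smooth bounded $\Omega$): by the divergence theorem, for any admissible field $\phi$, $$ \int_D \chi_\Omega \operatorname{div} \phi = \int_\Omega \operatorname{div} \phi = \int_{\partial\Omega} \phi \cdot n \, ds \leq Per(\Omega) $$ since $\|\phi\|_\infty \leq 1$, and the bound is approached by taking $\phi$ close to a smooth extension of the outer normal $n$, which gives $|D\chi_\Omega|(D) = Per(\Omega)$.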

Why is this useful, though? In fact, we have just generalized the notion of perimeter of a shape into the notion of total variation of a function. In shape optimization, it is often fruitful (and sometimes inevitable) to reformulate the original optimization problem in a broader setting. This is for instance the case for the homogenization method in conductivity or elasticity, or the porous media approach in fluid mechanics.

Our new problem is now the following: find $u\in BV(D)$ solving $$ \min_{\int_D u=m}{|Du|(D)}. $$ As interesting as it is from a conceptual point of view (this reformulation notably allows one to handle the delicate question of the existence of an optimal isoperimetric set), it cannot be treated numerically as such as long as we don't know how to approximate the $BV$ space by finite-dimensional spaces. This is where the Modica-Mortola functional comes into play. Define $$ F_\varepsilon(u) = \begin{cases} \int_D \varepsilon |\nabla u|^2 + \frac 1 \varepsilon u^2(1-u)^2 \mbox{ if } u \in H^1(D), \int_D u = m \\ +\infty \mbox{ otherwise } \end{cases} $$ and $$ F(u) = \begin{cases} Per(\Omega) \mbox{ if } u \in BV(D,\{0,1\}), \Omega = u^{-1}(1), Vol(\Omega)=m \\ +\infty \mbox{ otherwise } \end{cases}. $$ Then the following convergence holds $$ F_\varepsilon \xrightarrow[\varepsilon \to 0]{\Gamma} F $$ in the sense of $\Gamma$-convergence (in fact the limit involves a constant factor that we don't track here). Intuitively, the term $\frac 1 \varepsilon u^2(1-u)^2$ forces the function to be close to either $0$ or $1$ in most of $D$, while the term $\varepsilon |\nabla u|^2$ accounts for the perimeter of the transition layer where $u$ "jumps" between the two phases.
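To see where this constant factor comes from (a classical 1D computation, independent of the code below): for a monotone profile $u$ going from $0$ to $1$ across an interface, Young's inequality gives $$ \varepsilon (u')^2 + \frac{1}{\varepsilon} u^2(1-u)^2 \geq 2\, |u'|\, u(1-u), $$ with equality when $\varepsilon u' = u(1-u)$, that is for the logistic profile $u(x) = \left(1+e^{-x/\varepsilon}\right)^{-1}$. The interface therefore has width of order $\varepsilon$, and the energy it carries per unit length of interface is $2\int_0^1 s(1-s)\,ds = \frac{1}{3}$: this is the constant factor alluded to above.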

The notion of $\Gamma$-convergence implies that for small $\varepsilon$, the minimum of $F_\varepsilon$ is close to that of $F$. Thus, we can approximate our previous problem by the following one: find $u \in H^1(D)$ solving $$ \min_{\int_D u=m}{\int_D \varepsilon |\nabla u|^2 + \frac 1 \varepsilon u^2(1-u)^2}. $$ Now that our function space is $H^1(D)$, we can use a finite element discretization to approximate it with a finite-dimensional space $V_h = Span(\psi_1,...,\psi_n)$ where the $(\psi_i)_i$ are basis functions. If $\bar{u} = (u_1,...,u_n)$ denotes the components of $u = \sum u_i \psi_i$, the problem becomes a finite-dimensional optimization problem in $\mathbb{R}^n$ under an equality constraint. In order to keep the code as simple as possible, we will transform our problem a bit further into an unconstrained optimization problem by replacing the volume constraint with a penalization. It reads as follows: find $\bar{u} \in \mathbb{R}^n$ solution of $$ \min_{\bar{u}\in \mathbb{R}^n}{\int_D \varepsilon |\nabla u|^2 + \frac 1 \varepsilon u^2(1-u)^2 + \alpha u} $$ where $\alpha < 0$ (a negative $\alpha$ rewards volume, which prevents the shape from simply shrinking away). To perform the optimization, we can then use a standard gradient descent algorithm. Denoting $$ J : \bar{u} \mapsto \int_D \varepsilon |\nabla u|^2 + \frac 1 \varepsilon u^2(1-u)^2 + \alpha u, $$ we can compute, for $\bar{v} \in \mathbb{R}^n$ (with $v = \sum v_i \psi_i$), the differential of $J$: \begin{align*} dJ(\bar{u}).\bar{v} &= \int_D 2\varepsilon \nabla u \cdot \nabla v +\frac 2 \varepsilon u(1-u)(1-2u)v + \alpha v\\ &= \sum v_i \int_D 2\varepsilon \nabla u \cdot \nabla \psi_i +\frac 2 \varepsilon u(1-u)(1-2u)\psi_i + \alpha \psi_i. \end{align*} Thus the gradient of $J$ reads $$ \nabla J(\bar{u}) = \left( \int_D 2\varepsilon \nabla u \cdot \nabla \psi_i +\frac 2 \varepsilon u(1-u)(1-2u)\psi_i + \alpha \psi_i \right)_{1 \leq i \leq n}. $$ It is then easy to implement a gradient descent algorithm to make $J$ decrease.
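To fix ideas, here is a minimal FreeFem sketch of the setup; the mesh size, the names Th, Vh, eps, lambda (playing the role of $\alpha$) and the initial guess are illustrative choices of mine, not necessarily those of the code discussed below:

mesh Th = square(100, 100);            // triangulated unit square D
fespace Vh(Th, P1);                    // P1 Lagrange finite element space
real eps = 0.02;                       // interface thickness parameter
real lambda = -1.5;                    // volume penalization (the alpha < 0 above)
Vh u = (sqrt((x - 0.5)^2 + (y - 0.3)^2) < 0.25);  // initial guess: some blob inside D
// Direct evaluation of J(u):
real J = int2d(Th)( eps*(dx(u)^2 + dy(u)^2)
                  + (1/eps)*u^2*(1 - u)^2 + lambda*u );
cout << "J(u0) = " << J << endl;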

Implementation

The code is given in this repository. Clone it on your computer and then run
> FreeFem++ isoperimetric.edp
to launch the simulation. You will need MEDIT in order to visualize the results in the res/ folder. Alternatively, you can replace the medit call in the code by the FreeFem plot function. In the given code, the gradient can be quickly assembled using the following lines:

// Linear form assembling the gradient of J at the current iterate u;
// here lambda is the volume penalization (the alpha < 0 of the text).
varf dPer(v,w) = int2d(Th)( 2*eps*(dx(u)*dx(w)+dy(u)*dy(w))
                 + 2/eps*u*(1-u)*(1-2*u)*w + lambda*w)
                 + on(1,2,3,4,v=0);   // homogeneous condition on the four sides of D
gradPer[] = dPer(0, Vh);              // assemble the gradient vector
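Here `dPer(0, Vh)` assembles the part of the form that is linear in the test function `w`, which is exactly the vector $\nabla J(\bar{u})$ computed above. The `on(1,2,3,4,v=0)` condition should zero the entries of the assembled vector attached to the boundary of $D$, so that (as long as the initial guess vanishes there) $u$ remains $0$ on $\partial D$ and the optimal shape stays compactly included in $D$.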
One minor implementation point: if the objective function $J$ decreases, the step size is increased by a factor of $1.1$; if it increases, the phase field function $u$ isn't updated and the step size is multiplied by $0.6$. This can be considered as a basic line search procedure. Also note that, for small $\varepsilon$, the convergence to a disk becomes slower and slower as the iterations go on. This is a well-known behaviour of a closely related equation, namely the Allen-Cahn reaction-diffusion equation. You can find a nice video by Nils Berglund where time accelerates as the number of iteration steps increases. In our case, it may take a lot of iterations to get close to a disc.
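Put together, the descent with this accept/reject rule could look like the following sketch (my own variable names and parameters, not necessarily the repository's exact loop):

real step = 0.1;                      // initial descent step
Vh uold, gradPer;                     // previous iterate and gradient
real Jold = int2d(Th)( eps*(dx(u)^2 + dy(u)^2)
                     + (1/eps)*u^2*(1 - u)^2 + lambda*u );
for (int it = 0; it < 1000; it++) {
    uold[] = u[];                     // save the current iterate
    gradPer[] = dPer(0, Vh);          // assemble the gradient at u
    gradPer[] *= step;                // scale by the step size
    u[] -= gradPer[];                 // tentative descent update
    real Jnew = int2d(Th)( eps*(dx(u)^2 + dy(u)^2)
                         + (1/eps)*u^2*(1 - u)^2 + lambda*u );
    if (Jnew < Jold) { Jold = Jnew; step *= 1.1; }   // accept: enlarge the step
    else { u[] = uold[]; step *= 0.6; }              // reject: restore u, shrink the step
}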

If everything went OK, you should see something similar to this:

At least I hope so. As you can see, the convergence becomes reeeeeaaaaaallyyyyy slow.
Don't hesitate to play with the parameters (mesh size, epsilon, step size, etc.). If you use MEDIT for visualization, you can produce the previous type of video by executing
> medit res/step -a 0 1000
then right-click on the medit window > Toggle ImSave, then Play Sequence. It will create a sequence of images in the res/ folder that you can concatenate into a video by executing
> sh video.sh
(you will need ffmpeg).

Finally, by starting with a large $\varepsilon$ and decreasing it each time the optimization has converged, we can approximate the isoperimetric problem more and more faithfully. This should look like the following animation:
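In code, this continuation strategy could be sketched as follows (the schedule in epsValues is a hypothetical choice):

real[int] epsValues = [0.08, 0.04, 0.02, 0.01];   // decreasing schedule (assumed values)
for (int k = 0; k < epsValues.n; k++) {
    eps = epsValues[k];   // sharper and sharper interface
    step = 0.1;           // reset the descent step for the new eps
    // ... run the gradient descent loop of the previous section ...
}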

Once again, don't hesitate to contact me if you have any questions/issues/remarks about this simulation!
