
## Numerical Approximation using Relaxed Formulation

Sometimes it is easier to replace an optimization problem with a sequence of relaxed problems whose solutions approximate the solution to the initial problem.

This kind of procedure is useful when we need to approximate discontinuous functions numerically (in particular characteristic functions). The Modica–Mortola theorem states that the family of functionals

$\displaystyle F_\varepsilon (u) = \begin{cases} \varepsilon \int_D |\nabla u|^2+\frac{1}{\varepsilon} \int_D W(u) & u \in H^1(D) \\ \infty & \text{ otherwise} \end{cases}$

${\Gamma}$-converges to the functional

$\displaystyle F(u) = \begin{cases} \text{Per}(\{u=1\}) & \text{ if }\{u=1\} \text{ has finite perimeter} \\ \infty & \text{ otherwise} \end{cases}.$

(Recall that ${W}$ is a continuous real function which vanishes exactly at ${0}$ and ${1}$ and is positive elsewhere; a standard choice is ${W(t)=t^2(1-t)^2}$.)

The properties of ${\Gamma}$-convergence imply that minimizers of ${F_\varepsilon}$ for small ${\varepsilon}$ are close to minimizers of ${F}$. Intuitively, the second term of ${F_\varepsilon}$ forces the minimizer to stay near ${0}$ and ${1}$, while the first term penalizes the “length” of the interface where the function “jumps” from ${0}$ to ${1}$.
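This intuition can be made concrete by computing the first variation of ${F_\varepsilon}$. Assuming natural (Neumann) boundary conditions on ${\partial D}$, integration by parts gives

$\displaystyle \frac{\delta F_\varepsilon}{\delta u} = -2\varepsilon \Delta u + \frac{1}{\varepsilon}W'(u),$

so the gradient flow of ${F_\varepsilon}$ is the Allen–Cahn equation ${u_t = 2\varepsilon \Delta u - \frac{1}{\varepsilon}W'(u)}$. This first variation is exactly the quantity a gradient-based optimizer has to evaluate at each iteration.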

The first and simplest problem that can be attacked in this way is the isoperimetric problem. In that case, the only thing to be done is to minimize ${F_\varepsilon}$ directly, keeping in mind that we must impose a volume constraint. The optimization is done in this case using an optimized gradient descent algorithm (LBFGS). The initial condition is chosen at random (to give the algorithm the freedom to find the best optimizer it can) and multiplied by a constant so that its volume is ${V \in (0,1)}$. We work on a uniform grid on the unit square ${[0,1]\times [0,1]}$. For the gradient approximation we use finite differences, and we approximate the integral by the mean of the values at the discretization points. To preserve the volume it is enough to project the gradient at each iteration onto the hyperplane ${X_1+...+X_{n^2}=0}$. Here are the results obtained. (Note that the results were obtained for different values of ${\varepsilon}$, and the value of ${\varepsilon}$ determines the width of the interface region – the yellowish part in the last figure. If ${\varepsilon}$ is small enough, the interface becomes invisible, and the result is virtually a characteristic function.)
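A minimal sketch of this procedure, using plain projected gradient descent instead of LBFGS; the double-well ${W(t)=t^2(1-t)^2}$, the grid size, and the step size are illustrative assumptions, not the values used for the figures:

```python
import numpy as np

def Wp(u):
    """Derivative of the double-well W(t) = t^2 (1 - t)^2 (assumed choice of W)."""
    return 2*u*(1 - u)*(1 - 2*u)

def minimize_F_eps(n=64, eps=0.05, vol=0.3, steps=3000, lr=2e-4, seed=0):
    """Projected gradient descent for F_eps on a uniform grid over [0,1]^2.

    The volume constraint mean(u) = vol is preserved by projecting the
    gradient onto the hyperplane X_1 + ... + X_{n^2} = 0 at every step.
    """
    h = 1.0 / (n - 1)
    rng = np.random.default_rng(seed)
    u = rng.random((n, n))
    u *= vol / u.mean()                      # rescale so the "volume" equals vol
    for _ in range(steps):
        # five-point Laplacian with Neumann (edge-replicating) boundary
        up = np.pad(u, 1, mode='edge')
        lap = (up[2:, 1:-1] + up[:-2, 1:-1]
               + up[1:-1, 2:] + up[1:-1, :-2] - 4*u) / h**2
        grad = -2*eps*lap + Wp(u)/eps        # first variation of F_eps
        grad -= grad.mean()                  # projection: keeps the volume fixed
        u -= lr * grad
    return u
```

For small ${\varepsilon}$ the result should concentrate near the values ${0}$ and ${1}$, separated by a thin diffuse interface whose width is controlled by ${\varepsilon}$, as described above.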

Edouard Oudet has used the Modica–Mortola theorem to build a framework in which the problem of finding the partition minimizing the sum of the perimeters of its parts can be studied numerically by relaxation. It was proved by Thomas C. Hales (the Honeycomb Conjecture) that the partition of the plane into cells of equal area with the least total perimeter is the partition into regular hexagons. It is no wonder that bees construct hexagonal honeycombs in order to optimize the time and material used.

The relaxed problem in the partitioning case is the following:

**Multi-Phase Modica–Mortola.** Consider ${D \subset \Bbb{R}^N}$ a bounded open set and for ${n \geq 2}$ consider the space

$\displaystyle X=\Big\{(u_i)_{i=1}^n \in (L^1(D))^n \ \Big|\ \sum_{i=1}^n u_i=1,\ \int_D u_i=\frac{|D|}{n}\Big\}$

and define

$\displaystyle G_\varepsilon(u)=\begin{cases} \sum_{i=1}^n F_\varepsilon(u_i) & u \in (H^1(D))^n\cap X \\ +\infty & \text{otherwise} \end{cases}$

$\displaystyle G(u) =\begin{cases} \sum_{i=1}^n \text{Per}(\{u_i=1\}) & u \in (BV(D,\{0,1\}))^n\cap X\\ +\infty &\text{otherwise}. \end{cases}$

Then ${G_\varepsilon}$ ${\Gamma}$-converges to ${G}$.

Note that this is not an obvious consequence of the Modica–Mortola theorem. The first part, the ${\liminf}$ inequality, is a direct consequence, but the ${\limsup}$ part is not: if ${u_i^n}$ is the recovery sequence given by the Modica–Mortola theorem for ${u_i=\chi_{\Omega_i}}$, then ${u_n=(u_i^n)}$ is not necessarily a recovery sequence, since ${(u_i^n)}$ may fail to satisfy the sum constraint. The challenge is therefore to construct a suitable recovery sequence, and this is possible (with some care) using ideas similar to those in the proof of the ${\limsup}$ part of the Modica–Mortola theorem, based on the signed distance function to each set of the partition. For more details and results regarding the partitioning problem, you can take a look at the website of Edouard Oudet.
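The single-phase scheme above extends naturally to the relaxed partition problem: each phase evolves by the gradient of its own ${F_\varepsilon}$, and the gradient is projected so that both the pointwise sum constraint and the per-phase volume constraint are preserved. A sketch under the same illustrative assumptions (double-well ${W(t)=t^2(1-t)^2}$, grid size, step size):

```python
import numpy as np

def Wp(u):
    """Derivative of the double-well W(t) = t^2 (1 - t)^2 (assumed choice of W)."""
    return 2*u*(1 - u)*(1 - 2*u)

def project(g):
    """Project the gradient onto the tangent space of the constraints:
    subtracting the pointwise phase-average preserves sum_i u_i = 1,
    subtracting each phase's spatial mean preserves its volume."""
    g = g - g.mean(axis=0, keepdims=True)
    return g - g.mean(axis=(1, 2), keepdims=True)

def partition_descent(nphases=3, n=48, eps=0.05, steps=2000, lr=2e-4, seed=0):
    """Projected gradient descent for G_eps = sum_i F_eps(u_i) on [0,1]^2."""
    h = 1.0 / (n - 1)
    rng = np.random.default_rng(seed)
    u = rng.random((nphases, n, n))
    u /= u.sum(axis=0, keepdims=True)          # pointwise sum_i u_i = 1
    # shift each phase by a constant to enforce volume |D|/nphases;
    # the constants sum to zero, so the pointwise constraint survives
    u += 1.0/nphases - u.mean(axis=(1, 2), keepdims=True)
    for _ in range(steps):
        up = np.pad(u, ((0, 0), (1, 1), (1, 1)), mode='edge')
        lap = (up[:, 2:, 1:-1] + up[:, :-2, 1:-1]
               + up[:, 1:-1, 2:] + up[:, 1:-1, :-2] - 4*u) / h**2
        grad = -2*eps*lap + Wp(u)/eps          # gradient of F_eps, phase by phase
        u -= lr * project(grad)
    return u
```

Since both projections commute (removing each phase's spatial mean does not disturb the pointwise zero-sum property), a single combined projection per step keeps the iterates in the constraint set ${X}$.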