
Shape Optimization Course – Day 4


Here I will present the contents of the fourth day of the Shape Optimization Course held in Timisoara, Romania, in November 2011. The last two lessons were about the numerical computations used in shape optimization. Since I am not a very good numerical analyst, I may not have understood everything that was said in these courses… Therefore, if you find mistakes, they are due to my misunderstanding, not to the speaker.

Speaker: Edouard Oudet

When we talk about shape optimization, we want to find a shape which satisfies an optimality condition. Usually this condition is written as a minimization or maximization problem. For this we need a functional

\displaystyle F: \mathcal{P}(\Bbb{R}^N) \rightarrow \Bbb{R},

which attaches to every subset {\Omega} of {\Bbb{R}^N} a value {F(\Omega)}. We can then state our problem: find, if it exists, a shape {\Omega^*} such that

\displaystyle F(\Omega^*)=\inf_{\Omega \in \mathcal{A}} F(\Omega),

where {\mathcal{A}} is an admissibility class, for example {\mathcal{A}=\{ \Omega : |\Omega|=1\}}, {\mathcal{A}=\{ \Omega \subset D : \Omega \text{ open}\}}, {\mathcal{A}=\{ \Omega : \Omega \text{ convex}\}}, etc.

The first aspect we are concerned with is the existence of a solution to this problem. After proving that a solution exists, we may ask ourselves what this solution is. Is it a ball, is it not a ball, is it a regular shape? Since the class of admissible shapes is sometimes very large (measurable sets, disconnected sets), the solution may be irregular, and our intuition may reveal neither the right optimal shape nor its properties (symmetry, connectedness, etc.). This is where numerical analysis comes into play.

In standard numerical optimization we are concerned with functions {f:\Bbb{R}^N \rightarrow \Bbb{R}}, and we try to find the solution, if it exists, to problems of the type

\displaystyle \inf_{x \in C} f(x).

One method for solving this kind of problem is the gradient method: start from a chosen {x_0} and construct a sequence {(x_n)} such that {f(x_n)} decreases, in the following way

\displaystyle x_{n+1}=x_n-\alpha \nabla f(x_n)

where {0 < \alpha \ll 1} is a small parameter which must be well chosen. Note that the method is not guaranteed to give the right result for every function {f} and every starting point {x_0}. If {f} has a unique local minimum, then the gradient method should converge to the desired result, but if {f} has several local minima and {x_0} is chosen near a “higher” local minimum, then the gradient method will not find the global minimum.
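As a toy illustration (my own sketch, not code from the course), here is the gradient method in Python for the function {f(x,y)=x^2+2y^2}, which has a unique minimum at the origin:

    import numpy as np

    def grad_f(x):
        # gradient of f(x, y) = x^2 + 2*y^2
        return np.array([2.0 * x[0], 4.0 * x[1]])

    x = np.array([3.0, -2.0])      # starting point x_0
    alpha = 0.1                    # step size, 0 < alpha << 1
    for _ in range(200):
        x = x - alpha * grad_f(x)  # x_{n+1} = x_n - alpha * grad f(x_n)

    print(x)                       # converges to the unique minimizer (0, 0)

Here {\alpha=0.1} works because the function is smooth and well conditioned; too large an {\alpha} makes the iteration diverge, while too small an {\alpha} makes it very slow.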

What is the difference between numerical optimization and numerical shape optimization? Well, the first difficulty is defining the gradient of the shape functional {F}. We have seen that there is no natural topology on the family of domains, and the choice of a good topology is essential for the existence result. But even a good topology on the shapes does not turn the space of shapes into a normed vector space in which we could easily define the gradient of {F}. This leads us to the second difficulty: if we want to devise a “gradient method” for shapes, how could we derive the next shape from the current one, since we cannot add or subtract shapes?

As an example, let’s look at the problem of minimizing {\lambda_k(\Omega)} over domains {\Omega} with fixed volume, for example {|\Omega|=1}, where {\lambda_k(\Omega)} are the eigenvalues of the problem

\displaystyle \begin{cases} -\Delta u=\lambda u & \text{ in }\Omega \\ u=0 & \text{ on }\partial \Omega. \end{cases}

It is well known that for every bounded open set {\Omega} there is a sequence of eigenvalues

\displaystyle 0 < \lambda_1(\Omega) \leq \lambda_2(\Omega) \leq \ldots

such that {\lambda_k(\Omega) \rightarrow \infty} as {k \rightarrow \infty}.
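To make this concrete, here is a rough sketch (my own, not from the course, and assuming the domain is the unit square) of how such eigenvalues can be approximated: discretize the Dirichlet Laplacian with finite differences and compute the smallest eigenvalues of the resulting sparse matrix.

    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 60                       # interior grid points per direction
    h = 1.0 / (n + 1)            # mesh size on the unit square
    I = sp.identity(n)
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    # 5-point finite difference Laplacian with Dirichlet boundary conditions
    A = (sp.kron(I, T) + sp.kron(T, I)) / h**2

    # smallest eigenvalues via shift-invert around 0
    lam, _ = eigsh(A, k=3, sigma=0, which="LM")
    print(lam)  # approximately [2, 5, 5] * pi^2 for the unit square

For a general domain {\Omega} one would instead use a finite element discretization, but the principle is the same.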

It would be interesting to know what the optimal shapes are for {\lambda_1,\lambda_2}, etc., but it is surprising to find out how little is known here. For {\lambda_1}, Faber and Krahn proved that {\lambda_1(\Omega) \geq \lambda_1(B)}, where {B} is a ball with the same volume as {\Omega}. For {\lambda_2}, Krahn and Szegő proved that the optimal shape is the union of two disjoint balls with the same radius.
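As a quick sanity check (my own, not from the course), the Faber–Krahn inequality can be tested numerically: the first eigenvalue of the disk is known explicitly in terms of the first zero {j_{0,1}} of the Bessel function {J_0}, so we can compare the unit-area disk with the unit-area square.

    import numpy as np
    from scipy.special import jn_zeros

    j01 = jn_zeros(0, 1)[0]        # first zero of the Bessel function J_0
    # disk of area 1 has radius r = 1/sqrt(pi), so lambda_1 = (j01 / r)^2
    lam_disk = np.pi * j01**2
    lam_square = 2.0 * np.pi**2    # unit square: lambda_1 = pi^2 * (1^2 + 1^2)

    print(lam_disk)    # ~ 18.168
    print(lam_square)  # ~ 19.739, larger, as Faber-Krahn predicts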

Surprisingly, for {k \geq 3} essentially nothing is known, in the sense that there is no proof of what the optimal shape must look like: no explicit shape has been proved to be optimal. Here, where we know nothing about the exact form of the optimal shape (we do know that it exists, that’s a fact…), the necessity of developing numerical tools to attack shape optimization problems becomes evident.

For this you can check out the work of Edouard Oudet and, more recently, of Pedro Antunes and Pedro Freitas, who give numerically computed candidates for the optimal shapes for the first few eigenvalues.

Let’s pass now to another problem, which is about partitioning the space into optimal sets, in the sense that each set of the partition has fixed measure and we want to minimize the sum of the perimeters (asymptotically, of course).

In the plane this problem was known as the Honeycomb Conjecture, which says that the optimal partition of the plane into sets of equal area which asymptotically minimizes the total perimeter is the partition into regular hexagons, resembling a honeycomb. This conjecture was inspired, as the name says, by the fact that if the bees, which are very efficient workers, chose that pattern, it should be very close to an optimal one, since it saves both time and material. In nature, the optimum often tends to be achieved…

This is no longer a conjecture, since it was proved by Hales in 2001. In the three dimensional case, however, things are not so simple, and the optimal partition is not expected to consist of pieces all having the same shape. Kelvin’s conjecture proposed such a tiling, which was later improved by Weaire and Phelan. You can also see the work of Edouard Oudet on this subject here. The main tool used is a variant of the Modica–Mortola theorem.
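For reference, and as far as I remember it, the Modica–Mortola theorem concerns the functionals

\displaystyle F_\varepsilon(u)=\int_D \varepsilon|\nabla u|^2+\frac{1}{\varepsilon}u^2(1-u)^2 \,dx,

which {\Gamma}-converge, as {\varepsilon \rightarrow 0}, to a constant multiple of the perimeter of the set {\{u=1\}}, among functions {u} taking only the values {0} and {1}. This allows one to replace the minimization of perimeters by the minimization of smooth integral functionals, which is much friendlier numerically.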

What does a numerical algorithm for finding the optimal shape for {\lambda_1} (for example) look like? Here are some possible steps:

  • Start from a shape {\Omega_n}.
  • Compute {\lambda_1(\Omega_n)} and a corresponding eigenfunction {u_1}.
  • Compute the normal derivative of {u_1} along {\partial \Omega_n}.
  • Define a perturbation {V} of the boundary {\partial \Omega_n} which decreases {\lambda_1} (using the shape derivative formula recalled below).
  • The new set is {\Omega_{n+1}=(I+V)(\Omega_n)}.

This is called the boundary variation method. One of the problems with this method is that it preserves the topology of the domain (connectedness, number of connected components), so the starting point must be chosen well in order to reach a good result.
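For completeness, here is the classical Hadamard formula on which the fourth step above is based (as far as I remember it): if {\lambda_1(\Omega)} is simple and {u_1} is normalized in {L^2(\Omega)}, then

\displaystyle \frac{d}{dt}\Big|_{t=0} \lambda_1\big((I+tV)(\Omega)\big)=-\int_{\partial \Omega} \left(\frac{\partial u_1}{\partial n}\right)^2 V\cdot n \,d\sigma.

Choosing, for instance, {V\cdot n = \left(\frac{\partial u_1}{\partial n}\right)^2-c}, with {c} the mean value of {\left(\frac{\partial u_1}{\partial n}\right)^2} over {\partial \Omega}, decreases {\lambda_1} while preserving the volume to first order.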

For larger eigenvalues, the numerical approximation becomes trickier. It is known that the map {\Omega \mapsto \lambda_k(\Omega)} is differentiable if and only if the eigenvalue is simple. Surprisingly, numerical tests suggest that every optimizer for {\lambda_k(\Omega)} with {k \geq 2} corresponds to a multiple eigenvalue; even though the numerical evidence points this way, no proof of this fact is known.

A problem similar to the honeycomb problem is the problem of optimal partitions which minimize the sum of corresponding eigenvalues, for example

\displaystyle F(\Omega_1,...,\Omega_n)=\sum_{k=1}^n \lambda_1(\Omega_k),

where {n} is a fixed positive integer, {D} is given and {\Omega_1,\ldots,\Omega_n} form a partition of {D}.

This problem was studied by D. Bucur, E. Oudet and B. Bourdin. Existence results and numerical computations can be found here. It is conjectured that the solution behaves asymptotically as in the honeycomb conjecture, i.e. the optimal partition for the first eigenvalue is asymptotically the partition into regular hexagons.
