Master 6


(For the context see the Shape Optimization page where you can find links to the first 5 parts)

A particular consequence of the Modica-Mortola Theorem is that the functional

\displaystyle \mathcal{F}(E_1,E_2)=\sigma \text{Per}_\Omega(\partial^* E_1 \cap \partial^* E_2)

is lower semicontinuous with respect to the {L^1(\Omega)} convergence for {\sigma>0} on the set

\displaystyle \mathcal{K}=\{ (E_1,E_2) : E_1\cup E_2=\Omega,\ E_1\cap E_2=\emptyset, |E_i|=c_i>0\}

where the equalities are, as usual, up to a set of measure zero. It would be nice if a similar result were true for multi-phase systems, where a functional of the form

\displaystyle \mathcal{F}(E_1,E_2,...,E_k)=\sum_{1\leq i<j\leq k}\sigma_{ij}\text{Per}_\Omega(\partial^* E_i \cap \partial^* E_j)

is a {\Gamma}-limit and therefore lower semicontinuous, for {E=(E_i) \in \mathcal{K}}, where

\displaystyle \mathcal{K}=\{ (E_1,...,E_k) : \bigcup_{i=1}^k E_i=\Omega,\ E_i\cap E_j=\emptyset, \text{ for }i\neq j, |E_i|=c_i>0\}.

Let’s first remark that allowing the function {W} in the Modica-Mortola theorem to have more than two zeros does not suffice. Indeed, if we allow {W} to have zeros {\alpha<\beta<\gamma}, then the limiting phase will take only two of these values, {\alpha} and {\beta} or {\beta} and {\gamma}, depending on the constraint {\int_\Omega u=c}. This means that functionals of the form presented above cannot be represented as a {\Gamma}-limit when the function {W} is scalar with more than two zeros. This obstacle can be overcome by passing to vector-valued phase functions. This approach is due to Sisto Baldo [1], and we present its main ideas below.
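
To fix ideas, a simple example of a multi-well potential (this particular choice is only an illustration, and is not the potential used in [1]) is

\displaystyle W(u)=\prod_{i=1}^k |u-\alpha_i|^2,\ u \in \Bbb{R}^n_+,

which is non-negative and vanishes exactly at the prescribed points {\alpha_1,...,\alpha_k}.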

Consider {\Omega \subset \Bbb{R}^N}, {N \geq 2}, an open bounded set with Lipschitz continuous boundary. Take {u=(u^1,...,u^n)\in L^1(\Omega; \Bbb{R}^n)} such that {u^i \geq 0} for every {i=1..n}, with the constraint

\displaystyle \int_\Omega u(x)dx=m

where {m=(m^1,...,m^n) \in \Bbb{R}^n_+} is given. We also take a function {W : \Bbb{R}^n_+ \rightarrow [0,\infty)} with exactly {k} zeros {\alpha_1,...,\alpha_k \in \Bbb{R}^n_+} (here {\Bbb{R}^n_+=\{(x_1,...,x_n) \in \Bbb{R}^n : x_i \geq 0,\ i =1..n\}}). We assume that {m} satisfies the condition

\displaystyle \min\{ \alpha^i_1,...,\alpha^i_k\} \leq \frac{m^i}{|\Omega|} \leq \max\{\alpha^i_1,...,\alpha^i_k\},\ i=1..n

We make a technical assumption on {W}: there exist {0\leq K_1 < K_2 \in \Bbb{R}} such that

\displaystyle W(u) \geq \sup\{W(v) : v \in [K_1,K_2]^n\} \text{ for every }u \notin [K_1,K_2]^n.\ \ \ \ \ (1)

We will need another condition on {W} which will allow us to correct the volume when constructing an approximating sequence in the proof of the (LS) property. For this one of the two conditions mentioned below will suffice:

  • (a) {W} is bounded from above.
  • (b) {W} converges superlinearly to zero near each of the {\alpha_i,\ i=1..k}, i.e. for every {i=1..k} there exist a small ball {B_i} centered at {\alpha_i} and numbers {p_i>1, S>0} such that

    \displaystyle W(x) \leq S|x-\alpha_i|^{p_i} \text{ on }B_i.

Also define the following metric on {\Bbb{R}^n_+}:

\displaystyle d(\xi_1,\xi_2)= \inf\{ \int_0^1 \sqrt{W(\gamma(t))}|\gamma'(t)|dt : \gamma \in C^1([0,1];\Bbb{R}^n_+),\gamma(0)=\xi_1,\gamma(1)=\xi_2\}.

It is easy to see that {d(\xi_1,\xi_2)} does not depend on the parametrization of the path {\gamma}; this is a simple consequence of the change of variables property.
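
As a simple illustration (in the scalar case {n=1}, which is outside the multi-dimensional setting above, but shows what {d} measures), take {W(u)=u^2(1-u)^2} with zeros {0} and {1}. Any path joining {0} to {1} must pass through every intermediate value, so

\displaystyle d(0,1)=\int_0^1 \sqrt{W(s)}\,ds=\int_0^1 s(1-s)\,ds=\frac{1}{6},

the infimum being attained by the monotone path {\gamma(t)=t}.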

Remark 1 The definition of the metric above is equivalent to

\displaystyle d(\xi_1,\xi_2)= \inf\{ \int_0^1 \sqrt{W(\gamma(t))}|\gamma'(t)|dt : \gamma \in L([0,1];\Bbb{R}^n_+),\gamma(0)=\xi_1,\gamma(1)=\xi_2\},

where we have denoted by {L([0,1];\Bbb{R}^n_+)} the space of paths which are Lipschitz continuous. The restriction of the definition to {C^1} paths is enough for the proof of the theorem from the article of Baldo [1], but the space of {C^1} functions does not have the compactness properties we will need at the end of this section: a sequence of {C^1} paths which converges uniformly does not necessarily have a {C^1} limit, while a uniformly convergent sequence of Lipschitz continuous paths whose Lipschitz constants are bounded from above has a Lipschitz continuous limit. Note that a continuous path {\gamma} is ‘nice’ enough, in the sense that it has a well defined length, as soon as it is rectifiable, i.e. it has bounded variation. Restricting ourselves to the class of Lipschitz paths, not only do we get a well defined length for our path, but we also know that the path is almost everywhere differentiable and that the fundamental theorem of calculus holds. Furthermore, Lipschitz continuous paths, as well as {C^1} paths, admit an arclength parametrization, which allows us to parametrize them with constant speed on {[0,1]}. For more details see [2], Chapter 4.

To see that the two definitions of the metric are equivalent, it is enough to pick a Lipschitz continuous path {\gamma} and show that

\displaystyle F(\gamma)=\int_0^1 \sqrt{W(\gamma(t))}|\gamma'(t)|dt

can be approximated as well as we want by {F(\lambda)} where {\lambda} is a {C^1} path. For this note that {\gamma' \in L^\infty([0,1];\Bbb{R}^n)\subset L^1([0,1];\Bbb{R}^n)} so we can approximate {\gamma'} in the {L^1} norm by a continuous function {\sigma} such that {\|\sigma-\gamma'\|_1<\varepsilon}. Define {\lambda: [0,1] \rightarrow \Bbb{R}^n} by {\lambda(t)=\gamma(0)+\int_0^t \sigma(s)ds}. From here we deduce right away that {\lambda} is {C^1} and for every {t \in [0,1]} we have

\displaystyle |\gamma(t)-\lambda(t)|\leq \left|\gamma(t)-\gamma(0)-\int_0^t \gamma'(s)ds\right|+\left|\int_0^t \gamma'(s)ds-\int_0^t \sigma(s)ds\right|<\varepsilon.

Using the above estimates and the fact that the paths close to the infimum must be contained in a compact set, as a consequence of (1), it follows that the two definitions are equivalent. In view of this fact, we will use in each case the definition which best suits our needs.
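
Since the infimum in the definition of {d} is rarely computable in closed form, it can be useful to approximate it numerically. The following short Python sketch (purely illustrative: the potential, the discretization by piecewise-linear paths and the optimizer are ad-hoc choices, not taken from [1]) minimizes the discretized weighted length {\sum \sqrt{W(\text{midpoint})}\cdot(\text{segment length})} over the interior nodes of a polygonal path.

    # Illustration only: numerically approximate the degenerate distance d(xi1, xi2)
    # by minimizing the discretized weighted length over piecewise-linear paths.
    # The potential W below and the optimizer are ad-hoc choices, not taken from [1].
    import numpy as np
    from scipy.optimize import minimize

    # a hypothetical potential on R^2 with three zeros (wells)
    alphas = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

    def W(p):
        # product of squared distances to the wells: vanishes exactly at the alphas
        return float(np.prod([np.sum((p - a) ** 2) for a in alphas]))

    def weighted_length(interior, xi1, xi2, m):
        # path = xi1, m interior nodes, xi2; cost = sum sqrt(W(midpoint)) * segment length
        pts = np.vstack([xi1, interior.reshape(m, 2), xi2])
        mids = 0.5 * (pts[1:] + pts[:-1])
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        return float(np.sum(np.sqrt([W(q) for q in mids]) * seg))

    def approx_d(xi1, xi2, m=20):
        xi1, xi2 = np.asarray(xi1, float), np.asarray(xi2, float)
        t = np.linspace(0.0, 1.0, m + 2)[1:-1, None]
        x0 = ((1 - t) * xi1 + t * xi2).ravel()          # start from the straight segment
        res = minimize(weighted_length, x0, args=(xi1, xi2, m), method="Powell")
        return res.fun

    print(approx_d(alphas[0], alphas[1]))   # numerical estimate of d(alpha_1, alpha_2)

By Remark 1 it is enough to optimize over Lipschitz (here piecewise-linear) paths, which is exactly what the discretization does.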

Using the above metric we can define for {i=1..k} the following functions:

\displaystyle \varphi_i :\Bbb{R}^n_+ \rightarrow [0,\infty),\ \varphi_i(\xi)=d(\xi,\alpha_i).

The following proposition states a couple of properties of the functions {\varphi_i}.

Proposition 1 The function {\varphi_i} is locally Lipschitz continuous. Moreover, if {u \in H^1(\Omega)\cap L^\infty(\Omega)}, then {\varphi_i \circ u \in W^{1,1}(\Omega)} and the following inequality holds:

\displaystyle \int_\Omega |D(\varphi_i \circ u)(x)|dx \leq \int_\Omega \sqrt{W(u(x))}|D u(x)|dx.

Proof: First, it is easy to see that because {d} is a metric we have for every {a,b \in [K_1,K_2]^n} and for every {i=1..k}

\displaystyle |\varphi_i(a)-\varphi_i(b)|=|d(a,\alpha_i)-d(b,\alpha_i)|\leq d(a,b)

Now let’s estimate {d(a,b)}.

Suppose {K \subset \Bbb{R}^n_+} is a compact set, define {\widetilde{K}} to be the convex hull of {K}, which is also compact (and contained in {\Bbb{R}^n_+}, since this set is convex), and let {C=\sup_{x \in \widetilde{K}} \sqrt{W(x)}}. Since

\displaystyle d(a,b)= \inf\{ \int_0^1 \sqrt{W(\gamma(t))}|\gamma'(t)|dt : \gamma \in C^1([0,1];\Bbb{R}^n_+),\gamma(0)=a,\gamma(1)=b\},

by picking, for example {\gamma(t)=(1-t)a +tb} we get that

\displaystyle d(a,b) \leq \int_0^1 \sqrt{W(\gamma(t))}|a-b|dt \leq C |a-b|.

This proves that {\varphi_i} is locally Lipschitz-continuous for all {i=1..k}.

If the inequality

\displaystyle \int_{\Omega '}|D(\varphi_i\circ u)| \leq \int_{\Omega '} \sqrt{W(u(x))}|D u(x)|dx

holds for every open set {\Omega ' \subset \Omega} then the proposition is proved by using the monotone convergence theorem.

Consider first the case {u \in C^1(\Omega)}. The function {\varphi_i \circ u} is locally Lipschitz continuous, and therefore differentiable almost everywhere in {\Omega}. Take {x} a differentiability point of {\varphi_i\circ u}, let {(x_h)} be a sequence in {\Omega} converging to {x} and let {\sigma_h} be the segment {\sigma_h(t)=(1-t)x+tx_h}. By the definition of the metric {d} we have

\displaystyle |\varphi_i\circ u(x)-\varphi_i\circ u(x_h)| \leq d(u(x),u(x_h)) \leq

\displaystyle \leq \int_0^1 \sqrt{W(u(\sigma_h(t)))}|D u(\sigma_h(t))||x_h-x|dt=\sqrt{W(u(\sigma_h(t_h)))}|D u(\sigma_h(t_h))||x_h-x|,

where in the last equality we have applied the mean value theorem and {t_h \in [0,1]}, i.e. {\sigma_h(t_h) \in [x,x_h]} ({[s,t]} denotes the segment between {s} and {t} for every {s,t \in \Bbb{R}^N}). Divide the obtained inequality by {|x-x_h|} and take the limit as {h \rightarrow \infty} to get

\displaystyle |D(\varphi_i \circ u)(x)| \leq \sqrt{W(u(x))}|D u(x)|.

Take {u \in H^1(\Omega) \cap L^\infty(\Omega)}, and let {(u_h) \subset C^1(\Omega)} be a sequence such that {u_h \rightarrow u} in {H^1(\Omega)}. By passing to a subsequence, if necessary, we can assume that {u_h(x) \rightarrow u(x)} and {Du_h(x) \rightarrow Du(x)} almost everywhere in {\Omega}. Take {g \in C_0^1(\Omega';\Bbb{R}^N)} with {|g|\leq 1} in {\Omega'}. Then we have

\displaystyle \int_{\Omega'} (\varphi_i\circ u) \text{div} g dx = \lim_{h \rightarrow \infty} \int_{\Omega'} (\varphi_i \circ u_h)\text{div} g dx \leq

\displaystyle \leq \lim_{h \rightarrow \infty} \int_{\Omega'} |D(\varphi_i \circ u_h)|dx\leq

\displaystyle \leq \limsup_{h \rightarrow \infty} \int_{\Omega '} \sqrt{W(u_h(x))}|D u_h(x)|dx \leq

\displaystyle \leq \int_{\Omega '}\sqrt{W(u(x))}|D u(x)|dx,

where we have used the dominated convergence theorem, the inequality obtained for {C^1(\Omega)} functions and Fatou’s Lemma. Taking the supremum over all such {g} we obtain the desired inequality on {\Omega'}, which finishes the proof of the proposition. \hfill {\square}

Given two regular positive Borel measures {\mu} and {\nu} on {\Omega}, we define the supremum {\mu \vee \nu} as the smallest regular positive measure which is greater or equal to {\mu} and {\nu} on all Borel subsets of {\Omega}. We have

\displaystyle (\mu \vee \nu)(A)=\sup \{ \mu(A')+\nu(A'') : A',A'' \text{ open sets in }\Omega,\ A' \cap A''=\emptyset,\ A'\cup A'' \subset A \}.

In the same way we can define recursively the supremum of more than two measures defined on {\Omega}.
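
For example (a fact we will not use, but which clarifies the definition), if {\mu=f\,dx} and {\nu=g\,dx} with {f,g \in L^1(\Omega)} non-negative, then

\displaystyle (\mu \vee \nu)(A)=\int_A \max(f,g)\,dx,

so the supremum of two measures is in general strictly smaller than their sum.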

Consider a function {u: \Omega \rightarrow \Bbb{R}^n} such that {W(u(x))=0} almost everywhere and {\varphi_i \circ u \in BV(\Omega)}. This implies that

\displaystyle u(x)=\sum_{i=1}^k \alpha_i \chi_{S_i}(x),

where {S_1,...,S_k} are pairwise disjoint subsets of {\Omega} such that {|\Omega \setminus (S_1 \cup.. \cup S_k)|=0}. The next proposition proves that the supremum of some well chosen measures on {\Omega} is the expression we are looking for.

Proposition 2 Denote {\mu_i} the Borel measure {\mu_i : E \mapsto \displaystyle \int_E |D(\varphi_i \circ u)|}. Then {\text{Per}_\Omega(S_i)<\infty} for every {i=1..k} and

\displaystyle \left(\bigvee_{i=1}^k \mu_i \right)(\Omega)=\frac{1}{2} \sum_{i,j=1}^k d(\alpha_i,\alpha_j) \mathcal{H}^{N-1} (\partial^* S_i \cap \partial^* S_j \cap \Omega).

Proof: First, let’s prove that the sets {S_i} have finite perimeter. We apply the coarea-type Fleming-Rishel formula.

\displaystyle \int_\Omega |D(\varphi_i \circ u)| = \int_{-\infty}^\infty \text{Per}_\Omega(\{x \in \Omega : \varphi_i(u(x))\leq t \})dt \geq

\displaystyle \geq \int_0^{d_i} \text{Per}_\Omega(\{x \in \Omega : \varphi_i(u(x))\leq t \})dt=d_i \text{Per}_\Omega(S_i)

where {d_i=\min_{j \neq i}d(\alpha_i,\alpha_j)>0}; indeed, for {0<t<d_i} we have {\{x \in \Omega : \varphi_i(u(x))\leq t \}=S_i}, since {\varphi_i(\alpha_i)=0} and {\varphi_i(\alpha_j)=d(\alpha_i,\alpha_j)\geq d_i} for {j \neq i}. This implies that {\text{Per}_\Omega(S_i)<\infty} for {i=1..k}. We present now a lemma which will help us prove the proposition.

Lemma 3 Let {\mu} be a regular Borel measure on {\Omega} and {B_1,...,B_m} be disjoint Borel subsets of {\Omega} with finite {\mu}-measure and {c_i^h,\ i=1..m,\ h=1..k} be positive coefficients. Define

\displaystyle \mu_h(A)=\sum_{i=1}^m c_i^h \mu(A\cap B_i) \hspace{1cm} \nu(A)=\sum_{i=1}^m \max_h c_i^h \mu(A\cap B_i).

Then {\nu =\displaystyle \bigvee_{h=1}^k \mu_h}.

The proof of the lemma is quite simple. It suffices to notice that {\mu_h\leq \nu} for every {h=1..k}, and that for each {i=1..m} and each open set {S \subset B_i} we have

\displaystyle \max_h \mu_h(S)= \nu(S).

Let’s now return to the proof of the proposition. Pick {\Omega'} an open subset of {\Omega}. Then for every {i,j=1..k,\ i \neq j} we have

\displaystyle \text{Per}_{\Omega'}(S_i \cup S_j)=\mathcal{H}^{N-1}((\partial^* S_i \Delta \partial^* S_j) \cap \Omega')

and we also have

\displaystyle \partial^* S_i =\bigcup_{\substack{j=1 \\ j \neq i}}^k (\partial^* S_i\cap \partial^* S_j) \cup N

where {\mathcal{H}^{N-1}(N)=0}. A simple inductive argument on the cardinality of {J \subset \{1,..,k\}} proves that

\displaystyle \text{Per}_{\Omega'}(\bigcup_{j \in J}S_j)=\sum_{j \in J} \sum_{\substack{i=1 \\ i \notin J}}^k \mathcal{H}^{N-1}(\partial^* S_i \cap \partial^* S_j \cap \Omega')

Pick {i=1} and suppose for simplicity that

\displaystyle 0=\varphi_1(\alpha_1)\leq \varphi_1(\alpha_2)\leq...\leq\varphi_1(\alpha_k).

By the Fleming-Rishel coarea formula and the above result we obtain

\displaystyle \int_{\Omega'} |D(\varphi_1 \circ u)| = \int_0^{\varphi_1(\alpha_k)} \text{Per}_{\Omega'}( \{ x \in \Omega' : \varphi_1(u(x)) \leq t\})dt

\displaystyle = \sum_{j=1}^{k-1} [\varphi_1(\alpha_{j+1})-\varphi_1(\alpha_j)]\text{Per}_{\Omega'} \left(\bigcup_{l=1}^j S_l \right)

\displaystyle = \sum_{j=1}^{k-1}\sum_{l=1}^j \sum_{m=j+1}^k [\varphi_1(\alpha_{j+1})-\varphi_1(\alpha_j)] \mathcal{H}^{N-1} (\partial^* S_l \cap \partial^* S_m \cap \Omega')

\displaystyle = \sum_{l=1}^{k-1} \sum_{m=l+1}^k \sum_{j=l}^{m-1} [\varphi_1(\alpha_{j+1})-\varphi_1(\alpha_j)] \mathcal{H}^{N-1} (\partial^* S_l \cap \partial^* S_m \cap \Omega')=

\displaystyle = \sum_{l=1}^{k-1} \sum_{m=l+1}^k [\varphi_1(\alpha_{m})-\varphi_1(\alpha_l)] \mathcal{H}^{N-1} (\partial^* S_l \cap \partial^* S_m \cap \Omega')

The last part of the above equalities was obtained by changing the order of summation. Notice that the coefficient of {\mathcal{H}^{N-1} (\partial^* S_l \cap \partial^* S_m \cap \Omega')} depends only on {\varphi_1(\alpha_m)-\varphi_1(\alpha_l)} and because of the supposed ordering it is equal to {|\varphi_1(\alpha_m)-\varphi_1(\alpha_l)|}. The same result can be obtained for every {i=1..k}. Therefore for every {i=1..k} we have the following equality

\displaystyle 2 \int_{\Omega'} |D(\varphi_i\circ u)|=\sum_{j,h=1}^k \mathcal{H}^{N-1}(\partial^* S_j \cap \partial^* S_h \cap \Omega')| \varphi_i(\alpha_h)-\varphi_i(\alpha_j)|.
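
As a quick sanity check, for {k=2} and {i=1} this identity reads

\displaystyle \int_{\Omega'} |D(\varphi_1\circ u)|=d(\alpha_1,\alpha_2)\mathcal{H}^{N-1}(\partial^* S_1 \cap \partial^* S_2 \cap \Omega'),

which is clear, since in this case {\varphi_1\circ u=d(\alpha_1,\alpha_2)\chi_{S_2}}.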

Note that for every {i=1..k}, because of the definition of {\varphi_i} we have

\displaystyle |\varphi_i(\alpha_h)-\varphi_i(\alpha_j)|\leq d(\alpha_h,\alpha_j)

for every {h,j=1..k}, with equality when {i=h} or {i=j}. Now we can apply the above lemma for {c_{j,h}^i=|\varphi_i(\alpha_h)-\varphi_i(\alpha_j)|} and

\displaystyle \mu_i(\Omega')=\frac{1}{2}\sum_{j,h=1}^k c_{j,h}^i\mathcal{H}^{N-1}(\partial^* S_j \cap \partial^* S_h \cap \Omega').

Because {\max\limits_i c_{j,h}^i =d(\alpha_j,\alpha_h)}, from the lemma we find that

\displaystyle \left(\bigvee_{i=1}^k \mu_i \right)(\Omega')=\frac{1}{2} \sum_{i,j=1}^k d(\alpha_i,\alpha_j) \mathcal{H}^{N-1} (\partial^* S_i \cap \partial^* S_j \cap \Omega'),

from which the desired result follows if we take {\Omega'=\Omega}. \hfill {\square}

Using the result of the previous proposition we define for every {u \in L^1(\Omega;\Bbb{R}^n)}

\displaystyle F_0(u)=\begin{cases} \displaystyle 2 \bigvee_{i=1}^k \int_\Omega |D(\varphi_i\circ u)| & \varphi_i\circ u \in BV(\Omega),\ W(u(x))=0 \text{ a.e.},\\ & \displaystyle \int_\Omega u(x)dx=m \\ \infty & \text{otherwise} \end{cases}

and notice that when {F_0(u)<\infty} we have {u(x)=\sum_{i=1}^k \alpha_i \chi_{S_i}} with {S_1,..,S_k \subset \Omega}, {|\Omega \setminus (S_1\cup ... \cup S_k)|=0}, {\text{Per}_\Omega(S_i)<\infty} and

\displaystyle F_0(u)= \sum_{i,j=1}^k d(\alpha_i,\alpha_j) \mathcal{H}^{N-1} (\partial^* S_i \cap \partial^* S_j \cap \Omega)

Define, also, for every {\varepsilon>0} and for any {u \in L^1(\Omega)}

\displaystyle F_\varepsilon(u)=\begin{cases} \displaystyle \int_\Omega \left[ \varepsilon |D u|^2 +\frac{1}{\varepsilon}W(u)\right] dx & \displaystyle u \in H^1(\Omega; \Bbb{R}^n_+),\ \int_\Omega u(x)dx=m \\ \infty & \text{otherwise} \end{cases}

Now we are able to state the main result of this section.

Theorem 4 (Baldo) Under the above considerations we have

\displaystyle F_\varepsilon \stackrel{\Gamma}{\longrightarrow} F_0 \text{ in } L^1(\Omega; \Bbb{R}^n).
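
Before starting the proof, let us note a consistency check (it is not used in the argument): in the scalar case {n=1}, {k=2}, with {W(u)=u^2(1-u)^2} and zeros {\alpha_1=0}, {\alpha_2=1}, we have {d(0,1)=\int_0^1 s(1-s)ds=1/6}, so

\displaystyle F_0(u)=2d(0,1)\text{Per}_\Omega(S_1)=\frac{1}{3}\text{Per}_\Omega(S_1),

which is exactly the constant {2\int_0^1 \sqrt{W(s)}ds} appearing in the classical Modica-Mortola theorem, in the normalization used here.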

Proof: As in every {\Gamma}-convergence proof, we split the proof in two parts: the proof of (LI) and the proof of (LS). We first approach the proof of (LI), which is immediate, due to Propositions 1 and 2.

Pick {u \in L^1(\Omega;\Bbb{R}^n)} and a sequence {(u_h) \subset L^1(\Omega;\Bbb{R}^n)} such that {u_h \rightarrow u} in {L^1(\Omega;\Bbb{R}^n)}. Take a sequence {\varepsilon_h \rightarrow 0}. It is not restrictive to assume that {\lim\limits_{ h \rightarrow \infty} F_{\varepsilon_h}(u_h)} exists and is finite. We can choose a subsequence {u_{h_k}} that converges to {u} pointwise almost everywhere in {\Omega}, and by Fatou’s lemma we have

\displaystyle \int_\Omega W(u)dx \leq \liminf_{ k \rightarrow \infty} \int_\Omega W(u_{h_k})dx \leq \liminf_{\varepsilon_{h_k} \rightarrow 0} \varepsilon_{h_k}F_{\varepsilon_{h_k}}(u_{h_k})=0.

Since {W} is continuous and non-negative it follows that {W(u(x))=0} almost everywhere in {\Omega}. Now we need to prove that

\displaystyle 2\bigvee_{i=1}^k \int_\Omega |D(\varphi_i \circ u)| \leq \lim_{h \rightarrow \infty} \int_\Omega \left[\varepsilon_h |D u_h |^2+\frac{1}{\varepsilon_h} W(u_h) \right]dx . \ \ \ \ \ (2)

Using the assumption (1) on {W} we can further reduce the problem to the case where {u_h} is equibounded, i.e. we replace each scalar component of {u_h} with its truncation with respect to {K_1} and {K_2}. To simplify the notation, we denote the truncated sequence again by {(u_h)}. Since {W(u(x))=0} almost everywhere in {\Omega} we see that {u} takes, up to a set of measure zero, only the values {\alpha_1,...,\alpha_k}, which all lie in {[K_1,K_2]^n}. This means that the truncation does not affect the {L^1(\Omega;\Bbb{R}^n)} convergence of {(u_h)} to {u}. Note also that by truncation the values of the integrals in the right hand side of (2) decrease, so we are in fact going to prove a stronger inequality. Since {\varphi_i} is locally Lipschitz continuous it follows that {(\varphi_i \circ u_h) \rightarrow (\varphi_i \circ u)} in {L^1(\Omega)} for every {i=1..k}. By the lower semicontinuity of the total variation we have for every {i=1..k}

\displaystyle |D(\varphi_i \circ u)|(A)\leq \liminf_{h \rightarrow \infty} |D(\varphi_i \circ u_h)|(A)

for every open set {A \subset \Omega}. In the following formulas the supremums are taken over all families {(A_i)} of pairwise disjoint open subsets of {\Omega}.

\displaystyle \bigvee_{i=1}^k |D(\varphi_i \circ u)|(\Omega)= \sup\left\{ \sum_{i=1}^k |D(\varphi_i \circ u)|(A_i)\right\}

\displaystyle \leq \sup \left\{\sum_{i=1}^k \liminf_{h \rightarrow \infty} |D(\varphi_i \circ u_h)|(A_i) \right\}

\displaystyle \leq \sup \left\{ \liminf_{h \rightarrow \infty} \sum_{i=1}^k |D(\varphi_i \circ u_h)|(A_i) \right\}

\displaystyle \leq \liminf_{h \rightarrow \infty} \left(\sup\left\{ \sum_{i=1}^k |D(\varphi_i \circ u_h)|(A_i)\right\}\right)=

\displaystyle = \liminf_{h \rightarrow \infty} \bigvee_{i=1}^k |D(\varphi_i \circ u_h)|(\Omega)

Using the result of Proposition 1 we get

\displaystyle \liminf_{h \rightarrow \infty} \bigvee_{i=1}^k |D(\varphi_i \circ u_h)|(\Omega) \leq \liminf_{h \rightarrow \infty} \int_\Omega \sqrt{W(u_h)}|D u_h(x)|dx \leq

\displaystyle \leq \frac{1}{2} \liminf_{h \rightarrow \infty} \int_\Omega \left[\varepsilon_h |D u_h|^2+\frac{1}{\varepsilon_h} W(u_h) \right]dx
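
In the last inequality we used the elementary (Young) inequality, valid pointwise,

\displaystyle \sqrt{W(u_h)}|D u_h| \leq \frac{1}{2}\left[\varepsilon |D u_h|^2+\frac{1}{\varepsilon}W(u_h)\right].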

Combining the above results we obtain

\displaystyle F_0(u) \leq \liminf_{h \rightarrow \infty} F_{\varepsilon_h}(u_h),

and the proof of (LI) is complete.

Let’s now turn to the proof of the (LS) property. Pick {u \in L^1(\Omega;\Bbb{R}^n)}. If {F_0(u)=\infty} then any sequence which converges to {u} in {L^1 (\Omega;\Bbb{R}^n)} will satisfy (LS). Therefore, without loss of generality we can assume that

\displaystyle u(x)=\sum_{i=1}^k \alpha_i\chi_{S_i},

where {S_1,..,S_k} are pairwise disjoint sets with finite perimeter in {\Omega} such that {|\Omega\setminus (S_1 \cup ... \cup S_k)|=0}.

The following lemma, whose proof can be found in the article of Baldo [1], allows us to consider only partitions with the property that {S_1,...,S_k} are polygonal domains in {\Bbb{R}^N} with {\mathcal{H}^{N-1}(\partial S_i \cap \partial \Omega)=0}.

Lemma 5 Let {\{S_1,...,S_k\}} be a partition of {\Omega} like in the expression of {u}. Then there exists a sequence {\{S_1^h,...,S_k^h\}} of partitions of {\Omega} such that

  • (i) {S_i^h} is a polygonal domain and {\mathcal{H}^{N-1}(\partial S_i^h \cap \partial \Omega)=0} for any {i=1..k} and for all {h \in \Bbb{N}};
  • (ii) If {\displaystyle u_h(x)=\sum_{i=1}^k \alpha_i \chi_{S_i^h}(x)} then {u_h \rightarrow u} in {L^1(\Omega;\Bbb{R}^n)} as {h \rightarrow \infty};
  • (iii) {\displaystyle \int_\Omega u_h(x)dx=\int_\Omega u(x)dx=m} for all {h \in \Bbb{N}};
  • (iv) {\displaystyle \lim_{h \rightarrow \infty} \bigvee_{i=1}^k \int_\Omega |D(\varphi_i\circ u_h)|=\bigvee_{i=1}^k \int_\Omega |D(\varphi_i\circ u)|}.

We need another lemma which generalizes an idea of Modica [4]. We assume for simplicity that for every {i,j=1..k,\ i\neq j} there exists a distance minimizing geodesic connecting {\alpha_i} and {\alpha_j}, i.e. we suppose that there exists a {C^1}-path {\gamma_{ij}} such that {\gamma_{ij}(0)=\alpha_i,\ \gamma_{ij}(1)=\alpha_j} and

\displaystyle d(\alpha_i,\alpha_j) =\int_0^1 \sqrt{W(\gamma_{ij}(t))}|\gamma_{ij}'(t)|dt.

We will see at the end of the proof how to modify the argument if such geodesics do not exist. Also note that, since the value of {d(\alpha_i,\alpha_j)} is independent of the parametrization of the path {\gamma_{ij}}, we can assume that {|\gamma_{ij}'(t)| \neq 0} for every {t \in (0,1)}.

Lemma 6 Consider the following family of differential equations:

\displaystyle (y_\varepsilon^{ij})'=\frac{\sqrt{\delta+W(\gamma_{ij}(y_\varepsilon^{ij}))}}{\varepsilon |\gamma_{ij}'(y_\varepsilon^{ij})|} \ \ \ \ \ (3)

for {i,j=1..k,\ i\neq j}, with {\delta>0} fixed. Then, for every {\varepsilon>0} there exists a Lipschitz continuous function {\chi_\varepsilon :\Bbb{R}^{k-1} \rightarrow \Bbb{R}^n} and three constants {C_1,C_2} and {C_3}, depending only on {\delta}, such that:

  • (i) {\displaystyle \chi_\varepsilon(t_1,..,t_{k-1})=\begin{cases}\alpha_1 & t_1<0 \\ \alpha_i & t_1>C_1\varepsilon,..,t_{i-1}>C_1\varepsilon,\ t_i<0 ,\text{ for }i=2..k-1 \\ \alpha_k & t_1>C_1 \varepsilon,..,t_{k-1}>C_1 \varepsilon \end{cases}}
  • (ii) {|\chi_\varepsilon|<C_2,\ |D \chi_\varepsilon|<C_3/\varepsilon} almost everywhere in {\Bbb{R}^{k-1}}.
  • (iii) If {j>i} then on the set

    \displaystyle \{t \in \Bbb{R}^{k-1} : 0<t_i<C_1 \varepsilon,\ t_j<0,\ t_h >C_1 \varepsilon \text{ for any }h\neq i,j\}

    {\chi_\varepsilon} depends only on {t_i} and we can write

    \displaystyle \chi_\varepsilon (t_i)=\gamma_{ij}(y_\varepsilon^{ij}(t_i)),

    for any {t_i} such that {\chi_\varepsilon(t_i)\neq \alpha_i}, where {y_\varepsilon^{ij}} solves (3).

Proof of Lemma 6: We only need to find the constants {C_1,C_2} and {C_3} and to define {\chi_\varepsilon} at the points different from those considered in (i). Define

\displaystyle \psi_\varepsilon(t)=\int_0^t \frac{\varepsilon |\gamma_{ij}'(s)|}{\sqrt{\delta+W(\gamma_{ij}(s))}}ds,\ t \in [0,1],

and note that {\psi_\varepsilon} is increasing. If we denote {\eta_\varepsilon=\psi_\varepsilon(1)}, we immediately obtain {\eta_\varepsilon \leq \varepsilon \delta^{-1/2} \text{length}(\gamma_{ij})}. The inverse function {\widetilde{y}_\varepsilon :[0,\eta_\varepsilon] \rightarrow [0,1]} of {\psi_\varepsilon} satisfies the differential equation (3). We extend this function to the whole {\Bbb{R}} by putting

\displaystyle y_\varepsilon^{ij}(t)=\begin{cases} 0 & t\leq 0 \\ \widetilde{y}_\varepsilon(t) & 0 \leq t \leq \eta_\varepsilon \\ 1 & t \geq \eta_\varepsilon \end{cases}

Therefore {y_\varepsilon^{ij}} is a Lipschitz continuous function satisfying (3) in every point where {y_\varepsilon^{ij}\neq 0,1}. Putting

\displaystyle C_1=\max_{i,j=1..k}\{\delta^{-1/2}\text{length}(\gamma_{ij})\}

we can define {\chi_\varepsilon} as required on the strips considered in (iii). We choose

\displaystyle C_2 > \sup\left\{ |x| : x \in \bigcup_{i,j=1}^k \gamma_{ij}([0,1])\right\}

and

\displaystyle K= \sup_{\substack{i,j=1..k \\ i \neq j,\ t \in [0,1]}} \sqrt{\delta+W(\gamma_{ij}(t))}

Then {|\chi_\varepsilon|<C_2} and {|D \chi_\varepsilon|\leq K/\varepsilon} on the subsets of {\Bbb{R}^{k-1}} described in (i) and (iii). Using an extension result for Lipschitz continuous functions (for example Kirszbraun’s theorem, see [3]) we can extend {\chi_\varepsilon} to the whole {\Bbb{R}^{k-1}} and the extension has the same Lipschitz constant. Moreover, Rademacher’s theorem says that a Lipschitz map is almost everywhere differentiable, and its differential (at the points where it exists) is bounded by the best Lipschitz constant of our function. Therefore we can choose {C_3=K}. This ends the proof of the lemma. \hfill {\square}
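
To get a feeling for the construction in Lemma 6, the following small Python sketch (an illustration only, with an ad-hoc scalar double-well and the straight path {\gamma(s)=s}; none of these choices come from [1]) computes {\psi_\varepsilon}, inverts it numerically and prints the resulting transition profile {y_\varepsilon} together with the width {\eta_\varepsilon} of the transition layer.

    # Illustration only (not part of [1]): the 1-D transition profile of Lemma 6 for a
    # hypothetical scalar double-well W(u) = u^2 (1-u)^2 and the straight path gamma(s) = s
    # joining alpha_1 = 0 to alpha_2 = 1, so that |gamma'| = 1.
    import numpy as np

    def transition_profile(eps, delta, num=2000):
        s = np.linspace(0.0, 1.0, num)
        W = s ** 2 * (1.0 - s) ** 2
        # psi_eps(t) = int_0^t eps |gamma'| / sqrt(delta + W(gamma(s))) ds (trapezoid rule)
        integrand = eps / np.sqrt(delta + W)
        psi = np.concatenate([[0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s))])
        eta = psi[-1]                                  # width of the transition layer, of order eps
        # y_eps is the inverse of psi_eps, extended by 0 to the left and by 1 to the right
        def y_eps(t):
            return np.interp(t, psi, s, left=0.0, right=1.0)
        return y_eps, eta

    y, eta = transition_profile(eps=0.05, delta=1e-3)
    print("layer width eta =", eta)
    print("profile values:", np.round(y(np.linspace(0.0, eta, 6)), 3))

The printed profile increases from {0} to {1} across a layer of width {\eta_\varepsilon} of order {\varepsilon}, in accordance with the estimate {\eta_\varepsilon \leq \varepsilon \delta^{-1/2}\text{length}(\gamma_{ij})} above.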

We state here another Lemma following an idea of [4] which is used in our proof:

Lemma 7 Let {A} be a polygonal domain in {\Bbb{R}^N} and {\Omega} an open subset of {\Bbb{R}^N} such that {\mathcal{H}^{N-1}(\partial A \cap \partial \Omega)=0}. Define {h : \Bbb{R}^N \rightarrow \Bbb{R}} by

\displaystyle h(x)=\begin{cases} -\text{dist}(x,\partial A) & x \in A \\ \hfill \text{dist}(x,\partial A) & x \notin A \end{cases}.

Then {h} is Lipschitz continuous, {|Dh(x)|=1} for almost all {x \in \Bbb{R}^N} and if {S_t=\{x \in \Bbb{R}^N : h(x)=t\}} then

\displaystyle \lim_{t \rightarrow 0}\mathcal{H}^{N-1}(S_t \cap \Omega)=\mathcal{H}^{N-1}(\partial A \cap \Omega).
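
For instance (just to fix the ideas), if {A=(0,1)^2 \subset \Bbb{R}^2} and {\Omega} contains {\overline{A}}, then for small {t>0} the level set {S_t} has length {4+2\pi t}, while for small {t<0} it has length {4-8|t|}; both tend to {\mathcal{H}^1(\partial A)=4} as {t \rightarrow 0}.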

We will now prove the (LS) property for partitions with the properties given in Lemma 5. Note that for sets having these properties we can apply the results of Lemma 7. We define for {i \in \{1,..,k\}}

\displaystyle h_i(x)=\begin{cases} \text{dist}(x,\partial S_i) & x \notin S_i \\ -\text{dist}(x,\partial S_i) & x \in S_i \end{cases}

We fix {\delta>0} and note that for {\varepsilon} small enough we have {|D h_i(x)|=1} almost everywhere on the set {\{ x \in \Omega : |h_i(x)| <C_1\varepsilon \}}, for all {i=1..k}. Consider the sequence of functions

\displaystyle \widetilde{u}_\varepsilon(x)=\chi_\varepsilon(h_1(x),..,h_{k-1}(x)).

We denote {\Sigma_i^t=\{x \in \Omega : h_i(x)=t\}},\ {t>0},\ {i=1..k-1}, and using the coarea formula we obtain

\displaystyle \int_\Omega |\widetilde{u}_\varepsilon-u|dx \leq \sum_{i=1}^{k-1} \int_{\{ x \in \Omega : 0< h_i(x)<C_1 \varepsilon\}} |\widetilde{u}_\varepsilon-u|dx \leq

\displaystyle \leq 2C_2 \sum_{i=1}^{k-1} |\{ x \in \Omega : 0< h_i(x)<C_1 \varepsilon\}|=

\displaystyle = 2C_2 \sum_{i=1}^{k-1} \int_{\{ x \in \Omega : 0< h_i(x)<C_1 \varepsilon\}} |D h_i(x)|dx=

\displaystyle = 2C_2 \sum_{i=1}^{k-1} \int_0^{C_1\varepsilon} \mathcal{H}^{N-1} (\Sigma_i^t)dt\leq

\displaystyle \leq 2C_1C_2 \varepsilon \sum_{i=1}^{k-1} \sigma_\varepsilon^i

where {\sigma_{\varepsilon}^i=\sup\limits_{t \in [0,C_1 \varepsilon]} \mathcal{H}^{N-1}(\Sigma_i^t)} and because of Lemma 7 we have

\displaystyle \sigma_\varepsilon^i \rightarrow \mathcal{H}^{N-1}(\partial S_i \cap \Omega)<\infty,\ i=1..k-1.

This means that {\widetilde{u}_\varepsilon \rightarrow u} in {L^1(\Omega;\Bbb{R}^n)}. If {\displaystyle \int_\Omega \widetilde{u}_\varepsilon =m} then define {u_\varepsilon=\widetilde{u}_\varepsilon}. Otherwise denote

\displaystyle \eta_\varepsilon=\int_\Omega \widetilde{u}_\varepsilon(x)dx-\int_\Omega u(x)dx,

and find that {|\eta_\varepsilon|\leq C\varepsilon}, where {C>0} is a constant; this follows from the estimate above, since {|\eta_\varepsilon| \leq \int_\Omega |\widetilde{u}_\varepsilon-u|dx}.

We denote

\displaystyle F_\varepsilon(u,\Omega')=\int_{\Omega'} \left[ \varepsilon |D u|^2+\frac{1}{\varepsilon} W(u) \right]dx

We will now define {u_\varepsilon} such that {\int_\Omega u_\varepsilon=m}, and {u_\varepsilon} equals {\widetilde{u}_\varepsilon} in all of {\Omega} except a small set, such that the values of {u_\varepsilon} on that small set will not affect the behavior of {u_\varepsilon} as {\varepsilon \rightarrow 0}. We will give two variants for the definition of {u_\varepsilon}, using the hypotheses (a) or (b) on {W}.

Remark 2 In the original article of Baldo [1] the hypotheses presented below are not present, and the proof given there contains an error in this part, where we build the recovery sequence and correct its integral. The same error is present in the article [4] of Modica. Hypothesis (a) is a quick fix, but it is quite restrictive for {W} (no polynomial functions are allowed), even though truncating {W} by a constant far away from its zeros should not modify the {\Gamma}-convergence result. Hypothesis (b) was suggested by Sisto Baldo in a short email discussion I had with him about the additional hypothesis needed for his proof to work.

Hypothesis (a) {W} is bounded from above by {Q>0}.

In this case we pick an open ball {B_\varepsilon=B(x_0,\varepsilon^{\alpha})} such that {x_0} is an interior point of {\Omega \cap S_1} (if this set is empty, we relabel the {S_i} so that {S_1} has the desired property). For {\varepsilon} small enough we have {B_\varepsilon \subset \{x \in \Omega : \widetilde{u}_\varepsilon(x)=\alpha_1\}}. Therefore we may define {u_\varepsilon} as follows:

\displaystyle u_\varepsilon(x)=\begin{cases} \widetilde{u}_\varepsilon(x) & x \in \Omega \setminus B_\varepsilon\\ \alpha_1 + h_\varepsilon (1-\varepsilon^{-\alpha}|x-x_0|) & x \in B_\varepsilon \end{cases}

where {h_\varepsilon} is chosen such that

\displaystyle \int_{B_\varepsilon} h_\varepsilon(1-\varepsilon^{-\alpha} |x-x_0|) dx =-\eta_\varepsilon.

We have

\displaystyle \int_{B_\varepsilon}( 1-\varepsilon^{-\alpha} |x-x_0|)dx= |B_\varepsilon|-\varepsilon^{-\alpha}\int_{B_\varepsilon} |x-x_0|dx=\frac{\omega_N \varepsilon^{N\alpha}}{N+1},

where {\omega_N} is the volume of the {N}-dimensional unit ball. Therefore we define

\displaystyle h_\varepsilon=-\frac{N+1}{\varepsilon^{N\alpha}\omega_N}\eta_\varepsilon,

and note that

\displaystyle |h_\varepsilon|\leq K \varepsilon^{1-N\alpha}.

We evaluate

\displaystyle \limsup_{\varepsilon \rightarrow 0}F_\varepsilon(u_\varepsilon,B_\varepsilon) \leq \limsup_{\varepsilon \rightarrow 0}\left(\varepsilon |h_\varepsilon|^2 \varepsilon^{-2\alpha} |B_\varepsilon|+\frac{1}{\varepsilon}|B_\varepsilon|Q\right)

and note that for {\alpha \in \left( \frac{1}{N},\frac{3}{N+2} \right)} the limit in the right hand side is zero. Therefore we may pick {\alpha} in the prescribed interval, which is non-void for {N \geq 2}, and we are done.
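
For completeness, here is the bookkeeping of the exponents behind the last claim: since {|h_\varepsilon|\leq K \varepsilon^{1-N\alpha}} and {|B_\varepsilon|=\omega_N \varepsilon^{N\alpha}}, the two terms in the right hand side are of order

\displaystyle \varepsilon^{3-(N+2)\alpha} \quad \text{and} \quad \varepsilon^{N\alpha-1},

respectively, and both exponents are positive exactly when {\frac{1}{N}<\alpha<\frac{3}{N+2}}.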

Hypothesis (b) {W} converges superlinearly to zero near each of the {\alpha_i,\ i=1..k}, i.e. for every {i=1..k} there exist a small ball {B_i} centered at {\alpha_i} and numbers {p_i>1,S>0} such that

\displaystyle W(x) \leq S|x-\alpha_i|^{p_i} \text{ on }B_i.

We pick an open ball {B=B(x_0,r)} where {x_0} is an interior point of {S_1 \cap \Omega} and {r>0} is fixed and small enough so that {B \subset S_1\cap \Omega}; note that for {\varepsilon} small enough we have {B \subset \{x \in \Omega : \widetilde{u}_\varepsilon(x)=\alpha_1\}} and the values of {u_\varepsilon} defined below lie in the ball {B_1} from hypothesis (b). We define {u_\varepsilon} as follows:

\displaystyle u_\varepsilon(x)=\begin{cases} \widetilde{u}_\varepsilon(x) & x \in \Omega \setminus B\\ \alpha_1 + h_\varepsilon (r-|x-x_0|) & x \in B \end{cases}

where {h_\varepsilon=-\eta_\varepsilon (N+1)\omega_N^{-1}r^{-N-1}} is chosen such that {\int_\Omega u_\varepsilon(x)dx=m}. It is easy to see that {|h_\varepsilon|\leq K'|\eta_\varepsilon| \leq K\varepsilon}, where {K,K'} are positive constants. It is still true that {u_\varepsilon \rightarrow u} in {L^1(\Omega ;\Bbb{R}^n)}. It remains to calculate:

\displaystyle \limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon,B)=\limsup_{\varepsilon \rightarrow 0} \left[ \varepsilon |h_\varepsilon|^2 |B|+\frac{1}{\varepsilon}\int_B W(\alpha_1+h_\varepsilon (r-|x-x_0|))dx \right]

\displaystyle \leq \lim_{\varepsilon \rightarrow 0} \left[\varepsilon |h_\varepsilon|^2 |B|+\frac{1}{\varepsilon}S|B|r^{p_1} |h_\varepsilon|^{p_1}\right]=0

because {|h_\varepsilon|\leq K\varepsilon} and {p_1>1}.

This new function {u_\varepsilon} satisfies the constraint {\displaystyle\int_\Omega u_\varepsilon =m} for {\varepsilon } small enough. To be able to give a good estimate for {\displaystyle \limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon)} we consider the following partition of {\Omega}:

\displaystyle B_\varepsilon,

\displaystyle \Omega_1^\varepsilon = S_1 \setminus B_\varepsilon,

\displaystyle \Omega_i^\varepsilon = \{x \in S_i : h_j(x) >C_1 \varepsilon,\ j=1,..,i-1,\ h_i(x)<0 \} \text{ for } i=2,..,k,

\displaystyle \Omega_{ij}^\varepsilon =\{ x \in \Omega : 0<h_i(x)<C_1 \varepsilon,\ h_j(x)<0,\ h_l(x)>C_1\varepsilon \text{ for all } l \in \{1,..,k-1\}\setminus \{i,j\} \}, \text{ for }i,j \in \{1,..,k\},\ i \neq j,

\displaystyle \Omega_0^\varepsilon = \Omega \setminus \bigg( B_\varepsilon \cup \bigcup_{i=1}^k \Omega_i^\varepsilon \cup \bigcup_{\substack{i,j=1 \\ i \neq j}}^k \Omega_{ij}^\varepsilon \bigg).

We obviously have

\displaystyle \limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon) \leq \sum_{i=1}^k \limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon,\Omega_i^\varepsilon)+\limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon,B_\varepsilon)+

\displaystyle +\limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon,\Omega_0^\varepsilon)+\limsup_{\varepsilon \rightarrow 0}\sum_{\substack{i,j=1 \\ i \neq j}}^k F_\varepsilon(u_\varepsilon, \Omega_{ij}^\varepsilon)

We now discuss each of the terms in the right hand side of the above inequality. First, it is easy to note from the definition of {u_\varepsilon} and {\Omega_i^\varepsilon} that for {\varepsilon} small enough we have {u_\varepsilon \equiv \alpha_i} on {\Omega_i^\varepsilon}, and this means that all the terms from the sum

\displaystyle \sum_{i=1}^k \limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon,\Omega_i^\varepsilon)

are zero.

We turn now to the second term, and note that by either of the constructions (a) or (b) we have

\displaystyle \limsup_{\varepsilon \rightarrow 0}F_\varepsilon(u_\varepsilon, B_\varepsilon)= 0.

We go now to the third term and define {K_{ij}^\varepsilon =\{x \in \Omega : 0 <h_i(x) <C_1 \varepsilon,\ 0<h_j(x)<C_1 \varepsilon \},\ i>j}. Since {\Omega_0^\varepsilon \subset \displaystyle \bigcup_{1\leq j<i\leq k} K_{ij}^\varepsilon}, by using Lemma 6 we find that there exists a constant {K} such that

\displaystyle F_\varepsilon(u_\varepsilon,K_{ij}^\varepsilon)=\int_{K_{ij}^\varepsilon} \left[\varepsilon |D u_\varepsilon|^2+\frac{1}{\varepsilon} W(u_\varepsilon) \right]dx \leq K\varepsilon^{-1} |K_{ij}^\varepsilon|.

Denote {S_j^t=\{ x \in \Omega : h_j(x) > \min\{ t,C_1\varepsilon\}\}} and using again the coarea formula we have

\displaystyle |K_{ij}^\varepsilon|= \int_0^{C_1 \varepsilon} \mathcal{H}^{N-1}(\{x \in \Omega : h_i(x)=s,\ 0< h_j(x) <C_1 \varepsilon\}) ds \leq

\displaystyle \leq C_1 \varepsilon \sup_{0\leq s \leq C_1\varepsilon} \mathcal{H}^{N-1} (\{x \in \Omega : h_i(x)=s,\ 0< h_j(x)<C_1\varepsilon\})

\displaystyle \leq C_1 \varepsilon \sup_{0\leq s \leq C_1 \varepsilon} \mathcal{H}^{N-1}\left( \Sigma_i^s \setminus S_j^{C_1 \varepsilon}\right).

For almost all {\rho>0} we have {\mathcal{H}^{N-1}(\partial S_i \cap \partial S_j^\rho)=0}, and therefore, using Lemma 7 we obtain:

\displaystyle \limsup_{\varepsilon \rightarrow 0} \sup_{0\leq s \leq C_1 \varepsilon} \mathcal{H}^{N-1}(\Sigma_i^s \setminus S_j^{C_1\varepsilon})

\displaystyle \leq \limsup_{\varepsilon \rightarrow 0} \sup_{0\leq s \leq C_1 \varepsilon} \mathcal{H}^{N-1}(\Sigma_i^s \setminus S_j^\rho)=

\displaystyle = \mathcal{H}^{N-1}((\partial S_i \cap (\Omega \setminus S_j))\setminus S_j^\rho).

Passing to the infimum for {\rho>0} we get that

\displaystyle \limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon,K_{ij}^\varepsilon)=0,

and this implies

\displaystyle \limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon,\Omega_0^\varepsilon)=0.

It remains now to estimate the terms of the form

\displaystyle \limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon,\Omega_{ij}^\varepsilon).

Using the coarea formula and (3) we obtain

\displaystyle F_\varepsilon(u_\varepsilon, \Omega_{ij}^\varepsilon) = \int_{\Omega_{ij}^\varepsilon} \left[ \varepsilon |D u_\varepsilon|^2+\frac{1}{\varepsilon} W(u_\varepsilon) \right] dx =

\displaystyle = \int_{\Omega_{ij}^\varepsilon} \left[ \varepsilon |D (\gamma_{ij}\circ y_\varepsilon^{ij} \circ h_i)(x)|^2 +\frac{1}{\varepsilon}W(\gamma_{ij}\circ y_\varepsilon^{ij}\circ h_i)(x) \right]dx

\displaystyle = \int_0^{C_1 \varepsilon} \left[ \varepsilon |\gamma_{ij}'(y_\varepsilon^{ij}(t))|^2 [ (y_\varepsilon^{ij})'(t)]^2 +\frac{1}{\varepsilon} W(\gamma_{ij} \circ y_\varepsilon^{ij})(t) \right] \mathcal{H}^{N-1}(\Sigma_i^t \cap S_j)dt

\displaystyle \leq \tau_\varepsilon \int_0^{C_1 \varepsilon} \left[ \varepsilon |\gamma_{ij}'(y_\varepsilon^{ij}(t))|^2 [ (y_\varepsilon^{ij})'(t)]^2 +\frac{1}{\varepsilon}[\delta+ W(\gamma_{ij} \circ y_\varepsilon^{ij})(t)] \right]dt=

\displaystyle = 2\tau_\varepsilon \int_0^{C_1 \varepsilon}\left[ |\gamma_{ij}'(y_\varepsilon^{ij}(t))||(y_\varepsilon^{ij})'(t)|\sqrt{\delta+W(\gamma_{ij}\circ y_\varepsilon^{ij})(t)} \right]dt\leq

\displaystyle \leq 2\tau_\varepsilon \int_0^1 \sqrt{\delta+W(\gamma_{ij}(s))}|\gamma_{ij}'(s)|ds,

where

\displaystyle \tau_\varepsilon = \sup_{0 \leq t \leq C_1\varepsilon} \mathcal{H}^{N-1}(\Sigma_i^t \cap S_j)

and by Lemma 7 we have

\displaystyle \lim_{\varepsilon \rightarrow 0}\tau_\varepsilon =\mathcal{H}^{N-1}(\partial S_i \cap \partial S_j).

The equality in the above chain comes from the choice of the differential equation (3), which gives the equipartition {\varepsilon |\gamma_{ij}'(y_\varepsilon^{ij})|^2 [(y_\varepsilon^{ij})']^2=\frac{1}{\varepsilon}[\delta+W(\gamma_{ij}\circ y_\varepsilon^{ij})]}, and the last inequality comes from the change of variables {s=y_\varepsilon^{ij}(t)}. By passing to {\limsup} as {\varepsilon \rightarrow 0} in the above inequalities and summing over all pairs {i<j} (the factor {2} is absorbed by the sum over ordered pairs below) we get

\displaystyle \limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon) \leq \sum_{i,j=1}^k \mathcal{H}^{N-1}(\partial S_i \cap \partial S_j) \int_0^1 \sqrt{\delta+W(\gamma_{ij}(s))}|\gamma_{ij}'(s)|ds.

Passing now to the infimum over {\delta>0}, and noting that {\int_0^1 \sqrt{\delta+W(\gamma_{ij}(s))}|\gamma_{ij}'(s)|ds \rightarrow d(\alpha_i,\alpha_j)} as {\delta \rightarrow 0}, we get that

\displaystyle \inf_{\delta>0} \limsup_{\varepsilon \rightarrow 0}F_\varepsilon(u_\varepsilon) \leq F_0(u)

and by a diagonal argument (choosing {\delta=\delta(\varepsilon) \rightarrow 0} slowly enough) the proof of the (LS) property is finished.

The proof is now almost done. We assumed that geodesics {\gamma_{ij}} between {\alpha_i,\alpha_j} exist, but if such geodesics do not exist, then we can choose approximate geodesics {\gamma_{ij}^h} such that

\displaystyle \int_0^1 \sqrt{W(\gamma_{ij}^h(s))}|(\gamma_{ij}^h)'(s)|ds \leq d(\alpha_i,\alpha_j)+\frac{1}{h},

and reasoning as above we can construct a sequence {u_\varepsilon^h} such that

\displaystyle \limsup_{\varepsilon \rightarrow 0} F_\varepsilon(u_\varepsilon^h) \leq \sum_{i,j=1}^k \mathcal{H}^{N-1}(\partial S_i \cap \partial S_j)(d(\alpha_i,\alpha_j)+\frac{1}{h}),

and again a diagonal argument finishes the proof. \hfill {\square}

[1] Sisto Baldo, Minimal Interface Criterion for Phase Transitions in Mixtures of Cahn-Hilliard Fluids

[2] Giovanni Leoni, A First Course in Sobolev Spaces, Graduate Studies in Mathematics, Vol. 105, American Mathematical Society

[3] Herbert Federer, Geometric Measure Theory

[4] Luciano Modica, Gradient Theory of Phase Transitions with Boundary Contact Energy
