Archive

Posts Tagged ‘shape optimization’

Print your Matlab models in 3D

December 13, 2015

The emergence of 3D printers opens a whole new level of creative possibilities. Any computer-generated model can be materialized as soon as it is translated into a language the 3D printer understands. This is also the case for objects and structures which emerge from various mathematical research topics. Since I work on shape optimization problems, I have lots of structures that would look nice printed in 3D. Below you can see an example of a 3D model and its physical realization by a 3D printer.

I want to show below how you can turn a coloured Matlab patch into a file which can be used by a 3D printer. The first step is to export the Matlab information regarding the positions of the points, the face structure and the colours to the OBJ file format. This is not at all complicated. Vertex information is stored on a line of the form

\displaystyle v\ x\ y\ z\ R\ G\ B

where {v} is the literal character v, {x,y,z} are the coordinates of the point and {R,G,B} give the colour associated with the point in RGB format. The face information is entered in a similar fashion:

\displaystyle f\ t_1\ t_2\ t_3

where {t_1, t_2, t_3} are the indices of the points in the corresponding face. Once such an OBJ file is created, it can be imported into MeshLab (a free mesh editing program), from which you can export the structure to any format a 3D printer understands (such as STL). Once you have the STL file, you can go to a 3D printing website like Sculpteo and simply order your 3D object.
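The export step can be sketched in a few lines. The helper below is a hypothetical illustration in Python rather than Matlab (the function name write_obj and the sample data are my own); in Matlab the same two loops can be written with fprintf. Note that per-vertex colours on v lines are a non-standard OBJ extension, but MeshLab reads it.

```python
# Hypothetical Python sketch of the OBJ export described above (the original
# post does this from Matlab; the function name and sample data are mine).
# Per-vertex colours on "v" lines are a non-standard OBJ extension that
# MeshLab understands.

def write_obj(filename, vertices, colors, faces):
    """vertices: list of (x, y, z); colors: list of (R, G, B) with values
    in [0, 1]; faces: list of (t1, t2, t3) of 1-based vertex indices
    (the OBJ convention)."""
    with open(filename, "w") as f:
        for (x, y, z), (r, g, b) in zip(vertices, colors):
            f.write(f"v {x} {y} {z} {r} {g} {b}\n")
        for t1, t2, t3 in faces:
            f.write(f"f {t1} {t2} {t3}\n")

# Example: a single red triangle.
write_obj("triangle.obj",
          [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
          [(1, 0, 0)] * 3,
          [(1, 2, 3)])
```

The resulting triangle.obj contains three v lines followed by one f line, exactly in the format described above.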

Optimal triangles with vertices on fixed circles

October 5, 2015

Let {x,y,z>0} and suppose {ABC} is a triangle such that there exists a point {O} which satisfies {OA=x}, {OB = y}, {OC = z}. What is the relative position of {O} with respect to the triangle {ABC} such that

a) The area is maximized;

b) The perimeter is maximized.

This was inspired by this answer on Math Overflow.

Read more…

Eigenvalues – from finite dimension to infinite dimension

December 11, 2013

We can look at a square matrix {A \in \mathcal{M}_n(\Bbb{R})} and see it as a table of numbers. From this point of view, the matrices {B} and {C} below are completely different:

\displaystyle B=\begin{pmatrix}1.48 & -0.36& -0.12 \\ -1.44 & 1.08 & -0.64 \\ 2.24 & 1.32 & 3.44 \end{pmatrix}, C=\begin{pmatrix}-1& 14 & 6 \\ -1 & 6 & 2 \\ 0.5 & -0.5& 1 \end{pmatrix}

If instead we look at a square matrix as a linear transformation {f : \Bbb{R}^n \rightarrow \Bbb{R}^n, f(x)=Ax}, things change a lot. Since the transformation is arbitrary, {A} need not act the same way in every direction, and some directions are privileged in the sense that the transformation is a simple dilation along them, i.e. there exist {\lambda} and a non-zero vector {v} (the direction) such that {Av=\lambda v}. The values {\lambda} and the corresponding vectors {v} are so important for the matrix {A} that they almost characterize it; hence their names, eigenvalue and eigenvector, which mean own value and own vector (eigen = own in German). It turns out that {B} and {C} above have the same eigenvalues {1,2,3}, and because these are distinct, both {B} and {C} are similar to the diagonal matrix {\text{diag}(1,2,3)} ({X} and {Y} are similar if there exists an invertible {P} such that {X=PYP^{-1}}).
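The claim about the common eigenvalues is easy to check numerically; here is a quick verification sketch using NumPy (not part of the original post):

```python
# Quick numerical check that B and C share the eigenvalues 1, 2, 3.
import numpy as np

B = np.array([[ 1.48, -0.36, -0.12],
              [-1.44,  1.08, -0.64],
              [ 2.24,  1.32,  3.44]])
C = np.array([[-1.0, 14.0, 6.0],
              [-1.0,  6.0, 2.0],
              [ 0.5, -0.5, 1.0]])

eig_B = np.sort(np.linalg.eigvals(B).real)
eig_C = np.sort(np.linalg.eigvals(C).real)
print(eig_B)  # ≈ [1. 2. 3.]
print(eig_C)  # ≈ [1. 2. 3.]
```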

Read more…

Torricelli point and angles of 120 degrees

March 4, 2013

Let {ABC} be a triangle with all angles smaller than {120^\circ}. The point {T} which minimizes the sum {TA+TB+TC} is called the Torricelli point of the triangle {ABC}. One interesting property of the Torricelli point, besides the fact that it minimizes the above sum, is that all the angles formed around {T} are equal to {120^\circ}.

I will prove here that the fact that the angles around {T} are of {120^\circ} can be derived without any geometric considerations, just from the fact that {T} is the solution of the problem


\displaystyle \min_{T \in \Delta ABC } TA+TB+TC.
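As a numerical illustration (not part of the post's analytic argument), the minimizer can be approximated with the classical Weiszfeld iteration for the geometric median, and the {120^\circ} angles checked directly; the triangle below is an arbitrary choice with all angles smaller than {120^\circ}:

```python
# Numerical illustration (not from the original post): approximate the
# Torricelli point with the classical Weiszfeld iteration and check that
# the three angles around it measure 120 degrees.
import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)  # arbitrary acute-enough triangle
points = [A, B, C]

# Weiszfeld iteration for the point minimizing TA + TB + TC,
# started from the centroid.
T = (sum(p[0] for p in points) / 3, sum(p[1] for p in points) / 3)
for _ in range(1000):
    w = [1 / math.dist(T, p) for p in points]
    T = (sum(wi * p[0] for wi, p in zip(w, points)) / sum(w),
         sum(wi * p[1] for wi, p in zip(w, points)) / sum(w))

def angle(p, q):
    """Angle at T between the directions towards p and q, in degrees."""
    u = (p[0] - T[0], p[1] - T[1])
    v = (q[0] - T[0], q[1] - T[1])
    c = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(c))

print(angle(A, B), angle(B, C), angle(C, A))  # each ≈ 120
```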

Read more…

Master 7

March 3, 2013

(For the context and previous posts look on the Shape Optimization page for the links)

As a result of the theorem proved in the previous post and of the fact that every {\Gamma}-limit is lower semicontinuous we can see that the functional

\displaystyle F_0(u)=\sum_{i,j=1}^k d(\alpha_i,\alpha_j) \mathcal{H}^{N-1}(\partial^* S_i \cap \partial^* S_j \cap \Omega)

is lower semicontinuous on the space {X=\{ u \in L^1(\Omega;\Bbb{R}^N) : u=\sum_{i=1}^k \alpha_i \chi_{S_i},S_i \subset \Omega,\text{Per}_\Omega(S_i)<\infty,|\Omega\setminus (S_1 \cup ... \cup S_k)|=0, \text{ and } \sum_{i=1}^k |S_i|\alpha_i=m \}}.

Definition 1 We say that a set of numbers {\sigma_{ij},\ 1\leq i,j \leq k}, {i \neq j}, is an admissible configuration if there exist linearly independent vectors {\alpha_1,..,\alpha_k \in \Bbb{R}_+^n} and a continuous function {W : \Bbb{R}^n_+ \rightarrow [0,\infty)} whose zeros are precisely the points {\alpha_i,\ i=1..k}, such that {d(\alpha_i,\alpha_j)=\sigma_{ij},\ i,j=1..k}.

Using the above result we can see that if {\sigma_{ij},\ i,j=1..k} is an admissible configuration then the functional {\mathcal{F} :\mathcal{K} \rightarrow [0,\infty)} defined by

\displaystyle \mathcal{F}(S_1,..,S_k)=\sum_{1 \leq i <j \leq k} \sigma_{ij} \mathcal{H}^{N-1}(\partial^* S_i \cap \partial^* S_j \cap \Omega)

is lower semicontinuous on

\displaystyle \mathcal{K}=\{ (S_1,..,S_k) : S_i \subset \Omega, \text{Per}_\Omega(S_i)<\infty, |S_i|=c_i>0,\ c_1+...+c_k=|\Omega|\},

where {(c_i)} is the unique solution of the system

\displaystyle \sum_{i=1}^k c_i\alpha_i=m,

where {m} is chosen such that all {c_i} are positive.

Read more…

Master 6

March 3, 2013

(For the context see the Shape Optimization page where you can find links to the first 5 parts)

A particular consequence of the Modica-Mortola Theorem is that the functional

\displaystyle \mathcal{F}(E_1,E_2)=\sigma \text{Per}_\Omega(\partial^* E_1 \cap \partial^* E_2)

is lower semicontinuous with respect to the {L^1(\Omega)} convergence for {\sigma>0} on the set

\displaystyle \mathcal{K}=\{ (E_1,E_2) : E_1\cup E_2=\Omega,\ E_1\cap E_2=\emptyset, |E_i|=c_i>0\}

where the equalities hold, as usual, up to a set of measure zero. It would be nice if a similar result were true for multi-phase systems, where a functional of the form

\displaystyle \mathcal{F}(E_1,E_2,...,E_k)=\sum_{1\leq i<j\leq k}\sigma_{ij}\text{Per}_\Omega(\partial^* E_i \cap \partial^* E_j)

is a {\Gamma}-limit and therefore lower semicontinuous, for {E=(E_i) \in \mathcal{K}} where

\displaystyle \mathcal{K}=\{ (E_1,...,E_k) : \bigcup_{i=1}^k E_i=\Omega,\ E_i\cap E_j=\emptyset, \text{ for }i\neq j, |E_i|=c_i>0\}.

Let’s first remark that allowing the function {W} in the Modica-Mortola theorem to have more than two zeros does not suffice. Indeed, if we allow {W} to have zeros {\alpha<\beta<\gamma}, then the limiting phase takes only two of these values, either {\alpha} and {\beta} or {\beta} and {\gamma}, depending on the constraint {\int_\Omega u=c}. This means that functionals of the form presented above cannot be represented as a {\Gamma}-limit when the function {W} is scalar with more than two zeros. The obstacle can be overcome by passing to the multidimensional case, an approach due to Sisto Baldo [1] whose ideas we sketch below.

Read more…

Master 5

March 1, 2013

(If you are interested check out: Parts 1, 2, 3, 4, and the Shape Optimization page)

The notion of {\Gamma}-convergence was introduced by E. De Giorgi and T. Franzoni in [1]. For an introduction to the subject see the books [2], [3] by Andrea Braides and the freely available notes [4] by the same author.


Definition 1 Let {X} be a metric space and, for {\varepsilon >0}, let {F_\varepsilon : X \rightarrow [0,\infty]} be given. We say that {F_\varepsilon} {\Gamma}-converges to {F} on {X} as {\varepsilon \rightarrow 0}, and we write {\Gamma-\lim F_\varepsilon =F} or {F_\varepsilon \stackrel{\Gamma}{\longrightarrow} F}, if the following conditions hold:

  • (LI) For every {u \in X} and every sequence {(u_\varepsilon)} such that {u_\varepsilon \rightarrow u} in {X} we have

    \displaystyle \liminf_{\varepsilon \rightarrow 0}F_\varepsilon(u_\varepsilon)\geq F(u)

  • (LS) For every {u \in X} there exists a sequence {(u_\varepsilon)} such that {u_\varepsilon \rightarrow u} in {X} and

    \displaystyle \limsup_{\varepsilon \rightarrow 0}F_\varepsilon(u_\varepsilon)\leq F(u).
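To see the definition at work, here is a standard one-dimensional example (a textbook illustration, not from the original post). Take {X=\Bbb{R}} and

\displaystyle F_\varepsilon(x)=x^2+1+\sin(x/\varepsilon).

Then {\Gamma-\lim F_\varepsilon = F} with {F(x)=x^2}, even though {F_\varepsilon} has no pointwise limit. Indeed, (LI) holds because {F_\varepsilon(x_\varepsilon) \geq x_\varepsilon^2 \rightarrow x^2} whenever {x_\varepsilon \rightarrow x}; for (LS), pick {x_\varepsilon = \varepsilon(2k_\varepsilon \pi - \pi/2)} with integers {k_\varepsilon} chosen so that {x_\varepsilon \rightarrow x}; then {\sin(x_\varepsilon/\varepsilon)=-1} and {F_\varepsilon(x_\varepsilon)=x_\varepsilon^2 \rightarrow x^2}. Note also that {\min F = 0 = \lim_{\varepsilon \rightarrow 0} \inf F_\varepsilon}, illustrating the link between {\Gamma}-convergence and the convergence of minima.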

Read more…
