Archive

Posts Tagged ‘eigenvalues’

Numerical method – minimizing eigenvalues on polygons

December 23, 2013 1 comment

I will present here an algorithm for numerically finding the polygon {P_n} with {n} sides which minimizes the {k}-th eigenvalue of the Dirichlet Laplacian under a volume constraint.

The first question is: how do we compute the eigenvalues of a polygon? I adapted a variant of the method of fundamental solutions (see the general 2D case here) to the polygonal case. The method of fundamental solutions consists in finding a function which already satisfies the equation {-\Delta u=\lambda u} on the whole plane, and imposing that it vanishes on the boundary of the desired shape. We choose the fundamental solutions to be the radial functions {\Phi_n^\omega(x)=\frac{i}{4}H_0(\omega|x-y_n|)}, where {y_1,..,y_m} are well chosen source points and {\omega^2=\lambda}. We seek the solution as a linear combination of the functions {\Phi_n}, so we have to solve a system of the form

\displaystyle \sum \alpha_i \Phi_i^\omega(x) =0 , \text{ on }\partial \Omega

in order to find the desired eigenfunction. Since we cannot solve this system numerically for every {x \in \partial \Omega}, we choose a discretization {(x_i)_{i=1..m}} of the boundary of {\Omega} and arrive at a system of equations of the form

\displaystyle \sum \alpha_i \Phi_i^\omega(x_j) = 0

and this system has a nontrivial solution if and only if the matrix {A^\omega = (\Phi_i^\omega(x_j))} is singular. The values {\omega} for which {A^\omega} is singular are exactly the square roots of the eigenvalues of our domain {\Omega}.
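
Here is a minimal numerical sketch of this approach (not the code from the post; the unit disc as test domain, the number of collocation points and the placement of the sources on a circle of radius 1.3 are illustrative choices). It scans {\omega} and looks for dips of the smallest singular value of {A^\omega}:

    import numpy as np
    from scipy.special import hankel1

    # Collocation points x_j on the boundary of the unit disc and source points
    # y_i slightly outside it (radius 1.3); both choices are purely illustrative.
    m = 40
    t = 2 * np.pi * np.arange(m) / m
    x = np.column_stack((np.cos(t), np.sin(t)))
    y = 1.3 * np.column_stack((np.cos(t), np.sin(t)))
    dist = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)  # |x_j - y_i|

    def smallest_singular_value(omega):
        # A^omega has entries Phi_i^omega(x_j) = (i/4) H_0(omega |x_j - y_i|)
        A = 0.25j * hankel1(0, omega * dist)
        return np.linalg.svd(A, compute_uv=False)[-1]

    # Dips of the smallest singular value locate the square roots of eigenvalues.
    # For the unit disc the first dip should appear near j_{0,1} ~ 2.405.
    omegas = np.linspace(2.0, 3.0, 200)
    sigmas = [smallest_singular_value(w) for w in omegas]
    print("approximate sqrt(lambda_1):", omegas[int(np.argmin(sigmas))])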

Read more…

Eigenvalues – from finite dimension to infinite dimension

December 11, 2013 Leave a comment

We can look at a square matrix {A \in \mathcal{M}_n(\Bbb{R})} simply as a table of numbers. Seen this way, the matrices {B} and {C} below are completely different:

\displaystyle B=\begin{pmatrix}1.48 & -0.36& -0.12 \\ -1.44 & 1.08 & -0.64 \\ 2.24 & 1.32 & 3.44 \end{pmatrix}, C=\begin{pmatrix}-1& 14 & 6 \\ -1 & 6 & 2 \\ 0.5 & -0.5& 1 \end{pmatrix}

If instead we look at a square matrix as a linear transformation {f : \Bbb{R}^n \rightarrow \Bbb{R}^n, f(x)=Ax}, things change a lot. Since the transformation is arbitrary, it is natural that {A} does not act in the same way in every direction, and some directions are privileged in the sense that the transformation is a simple dilation along them, i.e. there exist {\lambda} and a non-zero vector {v} (the direction) such that {Av=\lambda v}. The values {\lambda} and the corresponding vectors {v} are so important for the matrix {A} that they almost characterize it; hence their names, eigenvalue and eigenvector, meaning "own value" and "own vector" (eigen = own in German). It turns out that {B} and {C} above both have the same eigenvalues {1,2,3}, and since these are distinct, both matrices are similar to the diagonal matrix {\text{diag}(1,2,3)} ({X} and {Y} are similar if there exists an invertible {P} such that {X=PYP^{-1}}).
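
A quick numerical check of this claim (a sketch using numpy, not part of the original post):

    import numpy as np

    B = np.array([[ 1.48, -0.36, -0.12],
                  [-1.44,  1.08, -0.64],
                  [ 2.24,  1.32,  3.44]])
    C = np.array([[-1.0, 14.0, 6.0],
                  [-1.0,  6.0, 2.0],
                  [ 0.5, -0.5, 1.0]])

    # Both should print (approximately) [1. 2. 3.]
    print(np.sort(np.linalg.eigvals(B).real))
    print(np.sort(np.linalg.eigvals(C).real))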

Read more…

Agreg 2012 Analysis Part 2

October 16, 2013 Leave a comment

Part 2. Some elements of Spectral Analysis

In this part we prove that the spectrum of a bounded linear operator is non-empty, and we look at the characteristics of the spectrum of a compact operator.

Let {E} be a complex Banach space which is not reduced to {\{0\}} (it is known that then {E'\neq \{0\}}). For {T \in \mathcal{L}(E)} we define the resolvent set {res(T)} as the set of those {\lambda \in \Bbb{C}} for which {\lambda I-T} is bijective, and we denote {R_\lambda(T)=(\lambda I-T)^{-1} \in \mathcal{L}(E)}.

We define the spectrum by {\sigma(T)=\Bbb{C} \setminus res(T)}. In particular, if {\lambda} is an eigenvalue of {T} then {\ker(\lambda I-T)\neq \{0\}} and therefore {\lambda \in \sigma(T)} (note, however, that {\sigma(T)} may contain elements which are not eigenvalues).

1. Suppose that {\|T\|<1}. Prove that {1 \in res(T)} and {(I-T)^{-1}=\sum_{k=0}^\infty T^k}.

2. Prove that if {|\lambda |> \|T\|} then {\lambda \in res(T)} and

\displaystyle \lim_{|\lambda| \rightarrow \infty} \|R_\lambda(T)\|=0.

3. Prove that {res(T)} is an open set in {\Bbb{C}} and that for every {x \in E,\ell\in E'} the map {\phi : \lambda \mapsto \ell(R_\lambda(T)x)} is analytic in a neighborhood of any point {\lambda_0 \in res(T)}.

4. Deduce that for every {T \in \mathcal{L}(E)}, {\sigma(T)} is non-empty and compact.
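
A finite-dimensional numerical illustration of question 1 above (a sketch, not part of the exam text; the matrix size and the rescaling factor 0.9 are arbitrary choices): the partial sums of the Neumann series converge to {(I-T)^{-1}} when {\|T\|<1}.

    import numpy as np

    rng = np.random.default_rng(0)
    T = rng.standard_normal((5, 5))
    T *= 0.9 / np.linalg.norm(T, 2)   # rescale so that the operator norm ||T|| < 1

    I = np.eye(5)
    partial_sum = np.zeros_like(T)
    power = np.eye(5)
    for _ in range(300):              # partial sums of the Neumann series sum T^k
        partial_sum += power
        power = power @ T

    # Should be close to zero: the series converges to (I - T)^{-1}
    print(np.max(np.abs(partial_sum - np.linalg.inv(I - T))))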

Read more…

Agreg 2012 Analysis Part 1

October 14, 2013 Leave a comment

Part 1. Finite dimension

The goal is to prove the following theorem:

Theorem 1. Let {A \in M_n(\Bbb{R})} be a square matrix with non-negative coefficients. Suppose that for every {x \in \Bbb{R}^n\setminus \{0\}} with non-negative coordinates, the vector {Ax} has strictly positive components. Then

  • (i) the spectral radius {\rho = \sup \{ |\lambda | : \lambda \in \Bbb{C} \text{ is an eigenvalue for }A\}} is a simple eigenvalue for {A};
  • (ii) there exists an eigenvector {v} of {A} associated to {\rho} with strictly positive coordinates;
  • (iii) any other eigenvalue of {A} satisfies {|\lambda|<\rho};
  • (iv) there exists an eigenvector of {A^T} associated to {\rho} with strictly positive components.
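
Here is a small numerical illustration of the theorem (a sketch, not part of the exam; the {4\times 4} matrix below is randomly generated with strictly positive entries): power iteration converges to an eigenvector with strictly positive components associated to the spectral radius, and every other eigenvalue has strictly smaller modulus.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random((4, 4)) + 0.1        # a random matrix with strictly positive entries

    # Power iteration: for such a matrix it converges to the Perron eigenvector.
    v = np.ones(4)
    for _ in range(1000):
        v = A @ v
        v /= np.linalg.norm(v)
    rho = v @ (A @ v)                   # Rayleigh-type estimate (v has unit norm)

    eigvals = np.linalg.eigvals(A)
    print(rho, np.max(np.abs(eigvals))) # the two values should agree
    print(v)                            # all components strictly positive
    print(np.sort(np.abs(eigvals)))     # the other eigenvalues have smaller modulus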

1. Consider {(w_1,..,w_n) \in \Bbb{C}^n} such that {|w_1+..+w_n|=|w_1|+...+|w_n|}. Prove that for distinct {j,l \in \{1,..,n\}} we have {\text{re}(\overline{w_j}w_l)=|w_j||w_l|}. Deduce that there exists {\theta \in [0,2\pi)} such that {w_j=e^{i\theta}|w_j|,\ j=1..n}.

2. Prove that the coefficients of {A} are strictly positive.

3. For {z \in \Bbb{C}^n} we denote {|z|=(|z_1|,..,|z_n|)}. Prove that {A|z|=|Az|} if and only if there exists {\theta \in [0,2\pi)} such that {z_j=e^{i\theta}|z_j|,\ j=1..n}.

4. Denote {\mathcal{C}= \{x \in \Bbb{R}^n : x_i \geq 0, i=1..n\}}. Consider {x \in \mathcal{C}} and denote {e=(1,1,..,1) \in \Bbb{R}^n}. Prove that

\displaystyle 0 \leq (Ax|e)\leq (x|e)\max_j \sum_{k=1}^n a_{kj}.

5. Denote {\mathcal{E}= \{ t \geq 0 : \text{ there exists }x \in \mathcal{C} \setminus \{0\} \text{ such that } Ax-tx \in \mathcal{C}\}}. Prove that {\mathcal{E}} is an interval which does not reduce to {\{0\}}, and that it is bounded and closed.

6. Denote {\rho=\max \mathcal{E}>0}. Prove that if {x \in \mathcal{C}\setminus \{0\}} satisfies {Ax-\rho x \in \mathcal{C}} then {Ax=\rho x}. Deduce that {\rho} is an eigenvalue of {A} and that there exists an associated eigenvector {v} with strictly positive coordinates.

7. Consider {z \in \Bbb{C}^n}. Prove that {Az=\rho z} and {(z|v)=0} imply {z=0}. Deduce that {\ker(A-\rho I)=\text{span}\{v\}} and that every other eigenvalue of {A} satisfies {|\lambda | <\rho}.

8. Prove that every eigenvector of {A} which has positive coordinates is proportional to {v}.

Read more…

IMC 2013 Problem 1

August 8, 2013 Leave a comment

Problem 1. Let {A} and {B} be real symmetric matrices with all eigenvalues strictly greater than {1}. Let {\lambda} be a real eigenvalue of the matrix {AB}. Prove that {|\lambda| >1}.
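
A quick randomized sanity check of the statement (illustrative only, not a proof; the matrix size and the eigenvalue range are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(2)

    def random_symmetric_eigs_gt1(n):
        # Random symmetric matrix with all eigenvalues drawn from (1, 3.1).
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        return Q @ np.diag(1.01 + 2.0 * rng.random(n)) @ Q.T

    for _ in range(1000):
        A, B = random_symmetric_eigs_gt1(4), random_symmetric_eigs_gt1(4)
        lam = np.linalg.eigvals(A @ B)
        real_eigs = lam[np.abs(lam.imag) < 1e-9].real
        assert np.all(np.abs(real_eigs) > 1.0)

    print("no counterexample found among 1000 random pairs")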

Read more…

Categories: Algebra, Olympiad

Eigenvalues via Fundamental Solutions

February 8, 2013 2 comments

Eigenvalue problems like

\displaystyle \begin{cases} -\Delta u =\lambda u & \text{ in }\Omega \\ \hfill u=0 & \text{ on }\partial \Omega \end{cases}

can be solved numerically in a variety of ways. Probably the best known one is the finite element method. I will present below a sketch of an algorithm which does not need meshes and which, when implemented correctly, can decrease computational costs.

The idea of fundamental solutions first appeared in the 1960s and was initially used to find solutions of the Laplace equation in a domain. It was later extended to more general equations and eigenvalue problems. The method uses (as the title says) particular fundamental solutions of the studied equation to build an approximation of the solution as a linear combination of them. The advantage is that the fundamental solutions are sometimes known in analytic form, and the only thing that remains to do is to find the optimal coefficients in the linear combination. A detailed exposition of the method can be found in the following article of Alves and Antunes.
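
For instance, in 2D the fundamental solution used for the eigenvalue problem above is {\Phi(x)=\frac{i}{4}H_0(\omega|x-y|)}, with {H_0} the Hankel function of the first kind and {\omega^2=\lambda}; away from the source point {y} it satisfies {-\Delta \Phi=\omega^2\Phi}. A small finite-difference check of this fact (a sketch; the frequency, source point, evaluation point and step size are arbitrary choices):

    import numpy as np
    from scipy.special import hankel1

    omega = 3.0                              # arbitrary frequency, omega^2 = lambda
    y = np.array([0.0, 0.0])                 # source point

    def Phi(x1, x2):
        # 2D fundamental solution (i/4) H_0(omega |x - y|)
        return 0.25j * hankel1(0, omega * np.hypot(x1 - y[0], x2 - y[1]))

    # Five-point finite-difference Laplacian at a point away from the source.
    x1, x2, h = 0.7, 0.4, 1e-3
    lap = (Phi(x1 + h, x2) + Phi(x1 - h, x2) + Phi(x1, x2 + h)
           + Phi(x1, x2 - h) - 4 * Phi(x1, x2)) / h**2

    # Should be small compared to omega^2 |Phi|: -Delta Phi = omega^2 Phi away from y.
    print(abs(-lap - omega**2 * Phi(x1, x2)))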

Read more…

First Dirichlet eigenvalue is simple for connected domains

January 24, 2013 2 comments

Suppose {\Omega \subset \Bbb{R}^N} is a connected open set and consider the first two eigenvalues of the Laplace operator with Dirichlet boundary conditions {\lambda_1(\Omega),\lambda_2(\Omega)}. Then {\lambda_1(\Omega)<\lambda_2(\Omega)}.
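
As a quick sanity check (a standard explicit example, not from the original post): on the unit square the Dirichlet eigenvalues are {\pi^2(m^2+n^2)} with {m,n\geq 1}, so

\displaystyle \lambda_1=2\pi^2 < 5\pi^2=\lambda_2,

and the first eigenvalue is indeed simple and strictly below the second.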

Read more…
