
Eigenvalues via Fundamental Solutions


Eigenvalue problems like

\displaystyle \begin{cases} -\Delta u = \lambda u & \text{in } \Omega \\ u = 0 & \text{on } \partial\Omega \end{cases}

can be solved numerically in a variety of ways. The best known is probably the finite element method. Below I sketch an algorithm which does not need a mesh and which, when implemented correctly, can reduce the computational cost.

The method of fundamental solutions first appeared in the 1960s and was initially used to find solutions of the Laplace equation in a domain. It was later extended to more general equations and to eigenvalue problems. The method uses (as the title says) particular fundamental solutions of the studied equation and builds an approximation of the solution as a linear combination of them. The advantage is that the fundamental solutions are sometimes known in analytic form, and the only thing that remains to be done is to find the optimal coefficients of the linear combination. A detailed exposition of the method can be found in the following article of Alves and Antunes.

In the case of the eigenvalue problem, the fundamental solutions {\Phi_\omega} are chosen to be solutions of the Helmholtz equation {(\Delta+\omega^2)\Phi_\omega=-\delta}, where {\delta} is the Dirac delta distribution. In the 2D case we can take {\Phi_\omega(x) =(i/4)H_0^{(1)}(\omega|x|)}, where {H_0^{(1)}} is the Hankel function of the first kind of order zero.
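For concreteness, here is a small Python sketch (using SciPy's hankel1; the helper name phi is my own) of how this fundamental solution can be evaluated:

```python
import numpy as np
from scipy.special import hankel1  # Hankel function of the first kind

def phi(omega, x, y):
    """Fundamental solution of the 2D Helmholtz equation:
    Phi_omega(x - y) = (i/4) * H_0^{(1)}(omega * |x - y|)."""
    r = np.linalg.norm(np.asarray(x) - np.asarray(y), axis=-1)
    return 0.25j * hankel1(0, omega * r)
```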

To build the approximation we choose {n} points {x_1,\ldots,x_n} on {\partial \Omega} and another {n} points {y_1,\ldots,y_n} outside {\Omega}. The outer points serve as source points for the corresponding Helmholtz equations with {\delta_{y_i}}, and the coefficients are chosen so that the values of the resulting function at {x_1,\ldots,x_n} are zero. (It is worth noting that other boundary conditions may also be considered.)

The choice of the families of points {x_i,y_i} matters, since it can make the results better or worse on nonsmooth domains with corners or cracks. Here the {x_i} are chosen uniformly on the boundary of {\Omega}, while each {y_i} is placed on the exterior normal at {x_i} so that {|y_i-x_i|} equals a parameter {\alpha}, which may be adjusted so that none of the points {y_i} ends up in the interior of {\Omega}.
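As an illustration only, for the unit disk and a hand-picked distance {\alpha}, the two families of points might be generated as follows:

```python
import numpy as np

def disk_points(n, alpha=0.4):
    """Collocation points x_i on the unit circle and source points y_i
    placed on the outward normals at distance alpha (a hand-picked value)."""
    t = 2 * np.pi * np.arange(n) / n                # uniform angles on the boundary
    x = np.column_stack((np.cos(t), np.sin(t)))     # x_i on the boundary of Omega
    normals = x                                     # for the unit disk the outward normal at x_i is x_i itself
    y = x + alpha * normals                         # y_i outside Omega
    return x, y
```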

The approximate solution will be of the form

\displaystyle u=\sum_{j=1}^n \alpha_j \Phi_\omega(\bullet -y_j).
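Given a coefficient vector, evaluating this sum is straightforward; a sketch reusing the phi helper from above (the names coeffs and sources are my own):

```python
import numpy as np
# reuses phi(omega, x, y) from the sketch above

def mfs_eval(omega, coeffs, sources, points):
    """u(p) = sum_j coeffs[j] * Phi_omega(p - y_j), evaluated at each point p."""
    return sum(c * phi(omega, points, y) for c, y in zip(coeffs, sources))
```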

Note that to solve the eigenvalue problem we first need to find the eigenvalues, and only afterwards the eigenfunctions. The eigenvalues can be found in the following way: imposing {u(x_i)=0} for all {i} gives a homogeneous linear system for the coefficients, and requiring a not-all-zero solution forces the matrix {A_\omega=(\Phi_\omega(x_i-y_j))_{ij}} to be singular.
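A sketch of the matrix assembly, together with a commonly used alternative indicator of singularity, the smallest singular value of {A_\omega}, which behaves better numerically than the raw determinant; this indicator is an addition of mine, not the computation used in the post:

```python
import numpy as np
from scipy.special import hankel1

def mfs_matrix(omega, x, y):
    """A_omega with entries Phi_omega(x_i - y_j) for boundary points x and sources y."""
    r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)  # pairwise distances |x_i - y_j|
    return 0.25j * hankel1(0, omega * r)

def singularity_indicator(omega, x, y):
    """Smallest singular value of A_omega; near-zero values signal that omega^2
    is close to an eigenvalue (an alternative to watching det A_omega)."""
    return np.linalg.svd(mfs_matrix(omega, x, y), compute_uv=False)[-1]
```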

To find the values of {\omega} where the matrix has zero determinant, we may change the problem into finding the singularities of the function {\omega \mapsto \log|\det A_\omega|}, because these are more visible numerically than the zeros of the determinant.
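A minimal sketch of such a scan for the unit disk, reusing mfs_matrix from above; the number of points, the distance {\alpha}, the {\omega} grid and the spike threshold are arbitrary choices here, and each candidate would be refined further in practice:

```python
import numpy as np
# reuses mfs_matrix(omega, x, y) from the previous sketch

# unit disk: collocation points on the circle, sources on the outward normals
n, alpha = 40, 0.4
t = 2 * np.pi * np.arange(n) / n
x = np.column_stack((np.cos(t), np.sin(t)))
y = (1 + alpha) * x

omegas = np.linspace(1.0, 10.0, 2000)
# slogdet returns log|det A_omega| in a way that avoids overflow/underflow of the determinant
logdet = np.array([np.linalg.slogdet(mfs_matrix(w, x, y))[1] for w in omegas])

# the sharp downward spikes sit at the square roots of the eigenvalues;
# flag deep local minima as candidates (a crude detector; refine each one afterwards)
spikes = [omegas[i] for i in range(1, len(omegas) - 1)
          if logdet[i] < logdet[i - 1] and logdet[i] < logdet[i + 1]
          and logdet[i] < np.median(logdet) - 3]
print(np.round(spikes, 3))
```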

I present below the graph of this function for the unit disk, together with a list of values found using MPSpack. (Note that the actual eigenvalues are the squares of the values presented below.)

It can be seen that the singularities of the graph correspond to the square roots of the eigenvalues, so the method is quite accurate; a quick independent check against the known values for the disk is sketched after the figure. I will come back with more details on finding the actual values (not just reading them off the graph) and, finally, on finding the eigenfunctions.

\lambda_1= 2.40

\lambda_2= 3.83

\lambda_3= 3.83

\lambda_4= 5.13

\lambda_5= 5.13

\lambda_6= 5.52

\lambda_7= 6.38

\lambda_8= 6.38

\lambda_9= 7.01

\lambda_{10}= 7.01

\lambda_{11}= 7.58

\lambda_{12}= 7.58

\lambda_{13}= 8.41

\lambda_{14}= 8.41

\lambda_{15}= 8.65

\lambda_{16}= 8.77

\lambda_{17}= 8.77

\lambda_{18}= 9.76

\lambda_{19}= 9.76

\lambda_{20}= 9.93

[Figure: graph of {\omega \mapsto \log|\det A_\omega|} for the unit disk]
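For the unit disk these values can be checked independently, since the Dirichlet eigenvalues of the disk are the squares of the zeros {j_{n,k}} of the Bessel functions {J_n} (zeros with {n\ge 1} give double eigenvalues). A quick check with SciPy, not part of the original computation:

```python
from scipy.special import jn_zeros

# square roots of the Dirichlet eigenvalues of the unit disk below 10:
# the zeros j_{n,k} of the Bessel functions J_n (those with n >= 1 are double)
roots = sorted(z for n in range(7) for z in jn_zeros(n, 4) if z < 10)
print([round(z, 2) for z in roots])   # reproduces the list above up to rounding
```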

Comment from Bob (February 26, 2015 at 8:30 am):

Make sure you have exponential convergence in your series expansion, disentangle your geometrically degenerate eigenvalues, use enough digits in your calculations, and make sure your points are a little crowded near vertices … and you don’t need to take the log. Finding the roots of the determinant (the eigenvalues) is then very straightforward. Increment N and see if you can get it to alternate above and below an asymptotic value. Oh, make sure your fractional-order Bessel functions are numerically good too. Have a lot of fun.
