## Eigenvalues via Fundamental Solutions

Eigenvalue problems like

$$\begin{cases} -\Delta u = \lambda u & \text{in } \Omega, \\ u = 0 & \text{on } \partial\Omega, \end{cases}$$

can be solved numerically in a variety of ways. Probably the best known is the finite element method. Below I sketch an algorithm which does not need meshes and which, when implemented correctly, can reduce computational costs.

The idea of the Method of Fundamental Solutions first appeared in the 1960s and was initially used to find solutions of the Laplace equation in a domain. It was later extended to more general equations and to eigenvalue problems. The method uses (as the title says) certain fundamental solutions of the studied equation and builds an approximation of the solution as a linear combination of them. The advantage is that the fundamental solutions are sometimes known in analytic form, and the only thing that remains to do is to find the optimal coefficients in the linear combination. A detailed exposition of the method can be found in the article of Alves and Antunes.

In the case of the eigenvalue problem, the fundamental solutions are chosen to be solutions of the Helmholtz equation

$$(\Delta + \lambda)\Phi_\lambda = -\delta,$$

where $\delta$ is the Dirac delta distribution. In the 2D case we can take

$$\Phi_\lambda(x) = \frac{i}{4}\, H_0^{(1)}\!\big(\sqrt{\lambda}\,|x|\big),$$

where $H_0^{(1)}$ is the first Hankel function (the Hankel function of the first kind, of order zero).
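For concreteness, here is a minimal sketch of this fundamental solution in Python, using `scipy.special.hankel1`; the function name `phi` and the array layout are my own choices for illustration, not part of the original exposition.

```python
import numpy as np
from scipy.special import hankel1

def phi(lam, x):
    """Fundamental solution of (Delta + lam) Phi = -delta in 2D,
    Phi_lam(x) = (i/4) * H0^{(1)}(sqrt(lam) * |x|),
    evaluated at the points x (an array of shape (..., 2))."""
    r = np.linalg.norm(x, axis=-1)
    return 0.25j * hankel1(0, np.sqrt(lam) * r)
```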

In order to build the approximation we choose $n$ points $x_1,\dots,x_n$ on $\partial\Omega$ and another $n$ points $y_1,\dots,y_n$ outside $\overline{\Omega}$. The outer points $y_j$ will be the *source points* for the corresponding Helmholtz equations with right-hand side $-\delta_{y_j}$, and the coefficients will be chosen such that the values of the approximation calculated at the boundary points $x_i$ are zero. (It is worth noting that other boundary conditions may also be considered.)

The choice of the two families of points is important, since it can make the results better or worse for irregular domains which have corners or cracks. In this case, the $x_i$ are chosen uniformly on the boundary of $\Omega$, while each $y_i$ is chosen on the exterior normal at $x_i$ such that $|y_i - x_i|$ is equal to a parameter $d$, which may vary so that none of the source points ends up in the interior of $\Omega$.
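As a sketch, for the unit disk the exterior normal is the radial direction, so the two families of points can be generated as follows (the parameter name `dist` and its default value are my own illustrative choices; the offset is a tuning parameter, not prescribed by the method):

```python
def mfs_points(n, dist=0.2):
    """n collocation points x_i uniformly on the unit circle, and n source
    points y_i at distance `dist` along the exterior normal, which for the
    disk is simply the radial direction."""
    theta = 2 * np.pi * np.arange(n) / n
    x = np.column_stack([np.cos(theta), np.sin(theta)])  # on the boundary
    y = (1.0 + dist) * x                                 # outside the disk
    return x, y
```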

The approximate solution will be of the form

$$u(x) = \sum_{j=1}^{n} \alpha_j \Phi_\lambda(x - y_j),$$

and the conditions $u(x_i) = 0$, $i = 1,\dots,n$, translate into the homogeneous linear system $A(\lambda)\alpha = 0$, where $A(\lambda)_{ij} = \Phi_\lambda(x_i - y_j)$.

Note that to solve the eigenvalue problem we first need to find the eigenvalues, and only after that the eigenfunctions. The eigenvalues can be found in the following way: the above system needs to have a nonzero solution $\alpha$, and that forces the matrix $A(\lambda)$ to be singular.
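In code, the matrix $A(\lambda)$ can be assembled by evaluating $\Phi_\lambda$ at all differences $x_i - y_j$; a short sketch, continuing the notation above:

```python
def mfs_matrix(lam, x, y):
    """A(lam)_{ij} = Phi_lam(x_i - y_j).  The approximation vanishes at all
    collocation points iff A(lam) @ alpha = 0 has a nonzero solution,
    i.e. iff A(lam) is singular."""
    diff = x[:, None, :] - y[None, :, :]  # pairwise differences, shape (n, n, 2)
    return phi(lam, diff)
```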

To find the values of $\lambda$ where the matrix has zero determinant, we may change the problem into finding the singularities of the function

$$g(\lambda) = \frac{1}{|\det A(\lambda)|},$$

because these are more visible, from a numerical point of view, than the zeros of the determinant.
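A rough way to locate these peaks numerically, under the same illustrative assumptions as above (unit disk, determinant-based indicator; note that for large $n$ the determinant may under- or overflow, in which case the smallest singular value of $A(\lambda)$ is a more robust indicator):

```python
import matplotlib.pyplot as plt

x, y = mfs_points(30, dist=0.2)
ks = np.linspace(0.5, 6.0, 2000)  # scan over k = sqrt(lambda)
g = np.array([1.0 / abs(np.linalg.det(mfs_matrix(k**2, x, y))) for k in ks])

plt.semilogy(ks, g)
plt.xlabel(r"$k = \sqrt{\lambda}$")
plt.ylabel(r"$1/|\det A(\lambda)|$")
plt.show()
```

For the disk, the spikes of such a plot should sit near the zeros of the Bessel functions $J_m$ (approximately 2.405, 3.832, 5.136, 5.520, …), since the Dirichlet eigenvalues of the unit disk are exactly the squares of those zeros.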

I present below the graph of this function for the unit disk, together with a list of values found using the MPSpack toolbox. (Note that the actual eigenvalues are the squares of the values presented below, i.e. the plot is drawn against $k = \sqrt{\lambda}$.)

It can be seen that the singularities of the graph correspond to the square roots of the actual eigenvalues, so this method is quite accurate. (For the unit disk the exact values are known: the square roots of the Dirichlet eigenvalues are the zeros $j_{m,k}$ of the Bessel functions $J_m$.) I will come back with more details on *finding* the actual values (not just seeing them on a graph) and, finally, on finding the eigenfunctions.

Make sure you have exponential convergence in your series expansion, disentangle your geometrically degenerate eigenvalues, use enough digits in your calculations, and make sure your points are a little crowded near the vertices … and you don't need to take the log. Finding the roots of the determinant (the eigenvalues) is then very straightforward. Increase $n$ and see if the computed values alternate above and below an asymptotic value. Oh, and make sure your fractional-order Bessel functions are numerically good too. Have a lot of fun.