
Archive for the ‘Optimization’ Category

Project Euler 607

June 11, 2017 2 comments

If you like solving Project Euler problems you should try problem 607. It’s not very hard, as it can be reduced to a small optimization problem. The idea is to find a path which minimizes travel time, knowing that different regions correspond to different speeds. A precise statement of the problem can be found on the official page. Here’s an image of the path which realizes the shortest time:

[Figure: euler607 – the time-minimizing path]
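Problems of this kind can be attacked numerically: split the path at the interfaces between regions and minimize the total travel time over the crossing points. Below is a Python sketch for a simplified setting with parallel strips; the widths, speeds and total lateral offset are made-up illustration data, not the data of Problem 607. At the optimum the crossings obey Snell's law: {\sin\theta_i/v_i} is constant along the path.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: perpendicular widths and speeds of the strips
widths = np.array([10.0, 10.0, 10.0, 10.0, 10.0])
speeds = np.array([9.0, 8.0, 7.0, 6.0, 5.0])
D = 30.0  # total lateral offset between the entry and exit points

def travel_time(d):
    # d[i] = lateral displacement inside strip i; time = segment length / speed
    return np.sum(np.sqrt(widths**2 + d**2) / speeds)

# the lateral displacements must add up to the prescribed offset D
cons = {'type': 'eq', 'fun': lambda d: np.sum(d) - D}
res = minimize(travel_time, np.full(5, D / 5), constraints=cons)
d = res.x

# Snell's law: sin(theta_i)/v_i should be constant along the optimal path
snell = d / (np.sqrt(widths**2 + d**2) * speeds)
```

The Snell quantity computed at the end is a good sanity check: if the constrained minimization converged, all its entries agree.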

 

SIAM 100 digit challenge no 4 – Matlab solution

October 11, 2015 Leave a comment

In 2002 the SIAM News magazine published a list of {10} problems in numerical analysis. The answer to each problem should be given with an accuracy of at least {10} digits, and so there came the Hundred-Dollar, Hundred-Digit Challenge. I present below my attempt at solving the {4}th problem in this series, which deals with finding the global minimum of a highly oscillating function. I confess that my approach is rather simplistic, but it solves the problem. First, let’s state the problem:

Problem 4. What is the global minimum of the function

\displaystyle e^{\sin(50x)}+\sin(60e^y)+\sin(70\sin x)+\sin(\sin(80y))-\sin(10(x+y))+\frac{1}{4}(x^2+y^2).
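One simple numerical attack (a Python sketch of mine, rather than the Matlab code from the post): the quadratic term {\frac14(x^2+y^2)} dominates away from the origin, so the global minimizer lies in a small box around {(0,0)}; a fine grid search there followed by a derivative-free local refinement already locates the minimum.

```python
import numpy as np
from scipy.optimize import minimize

def f(p):
    x, y = p
    return (np.exp(np.sin(50*x)) + np.sin(60*np.exp(y)) + np.sin(70*np.sin(x))
            + np.sin(np.sin(80*y)) - np.sin(10*(x + y)) + 0.25*(x**2 + y**2))

# the quadratic term forces the minimizer into a small box around the origin;
# sample it finely enough to resolve the oscillations (frequencies up to ~80)
xs = np.linspace(-1.2, 1.2, 481)
X, Y = np.meshgrid(xs, xs)
Z = (np.exp(np.sin(50*X)) + np.sin(60*np.exp(Y)) + np.sin(70*np.sin(X))
     + np.sin(np.sin(80*Y)) - np.sin(10*(X + Y)) + 0.25*(X**2 + Y**2))
i, j = np.unravel_index(np.argmin(Z), Z.shape)

# refine the best grid point with a derivative-free local search
res = minimize(f, [X[i, j], Y[i, j]], method='Nelder-Mead',
               options={'xatol': 1e-12, 'fatol': 1e-12})
```

The refined value `res.fun` agrees with the challenge's certified answer to many digits.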

Read more…

Optimal triangles with vertices on fixed circles

October 5, 2015 Leave a comment

Let {x,y,z>0} and suppose {ABC} is a triangle such that there exists a point {O} which satisfies {OA=x}, {OB = y}, {OC = z}. What is the relative position of {O} with respect to the triangle {ABC} such that

a) The area is maximized;

b) The perimeter is maximized.

This was inspired by this answer on Math Overflow.
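The maximal-area configuration is easy to explore numerically: fix {A} on its circle and optimize the angular positions of {B} and {C}. This Python sketch is my own illustration, not code from the post.

```python
import numpy as np
from scipy.optimize import minimize

def tri_area(P):
    # shoelace formula for the area of the triangle P = [A, B, C]
    (ax, ay), (bx, by), (cx, cy) = P
    return 0.5 * abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay))

def max_area(x, y, z):
    # A is fixed at angle 0 on its circle of radius x around O = (0,0);
    # optimize the angles t[0], t[1] of B and C on their circles
    def neg_area(t):
        A = (x, 0.0)
        B = (y * np.cos(t[0]), y * np.sin(t[0]))
        C = (z * np.cos(t[1]), z * np.sin(t[1]))
        return -tri_area([A, B, C])
    res = minimize(neg_area, [2.0, 4.0], method='Nelder-Mead',
                   options={'xatol': 1e-10, 'fatol': 1e-12})
    return -res.fun
```

For {x=y=z} the optimizer recovers the equilateral triangle, for which {O} is the circumcenter.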

Read more…

Two more mind games solved using Matlab

March 31, 2014 1 comment

After solving the Sudoku puzzle with Matlab, I wondered what else could be done using integer programming. Many of the mind games you can find on the Internet can be modelled as matrix problems with linear constraints. I will present two of them, together with their solutions, below. The game ideas are taken from www.brainbashers.com. Note that in order to use the pieces of code I wrote you need to install the YALMIP toolbox.

1. Three in a row

You have a {2n \times 2n} table {(n \geq 3)}, and on each row and column you need {n} squares colored with color {A} and {n} squares colored with color {B}, such that no three consecutive squares, taken vertically or horizontally, have the same color. At the beginning you are given the colors of some of the squares, chosen so that the solution is unique. The goal is to color all the small squares, starting from the given ones, so that the final configuration satisfies these properties.
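The post's code uses Matlab with YALMIP; as a rough Python equivalent, here is a sketch of the same 0/1 model using scipy's `milp`. Encode color {B} as 1: row and column sums equal {n}, and every three consecutive squares sum to between 1 and 2 (so they are never all equal). The clues are imposed through variable bounds, and with a zero objective the solver just searches for a feasible coloring.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def three_in_a_row(n, clues):
    """Solve 'three in a row' as a 0/1 integer program.
    clues: dict {(i, j): 0 or 1} of pre-colored squares (1 = color B).
    Returns a (2n x 2n) 0/1 matrix, or None if infeasible."""
    N = 2 * n
    idx = lambda i, j: i * N + j
    rows_A, lb, ub = [], [], []

    def add(coords, lo, hi):
        a = np.zeros(N * N)
        for (i, j) in coords:
            a[idx(i, j)] = 1.0
        rows_A.append(a); lb.append(lo); ub.append(hi)

    for i in range(N):
        add([(i, j) for j in range(N)], n, n)        # n squares of color B per row
        add([(j, i) for j in range(N)], n, n)        # ... and per column
    for i in range(N):
        for j in range(N - 2):
            add([(i, j), (i, j+1), (i, j+2)], 1, 2)  # no 3 equal horizontally
            add([(j, i), (j+1, i), (j+2, i)], 1, 2)  # no 3 equal vertically

    lower, upper = np.zeros(N * N), np.ones(N * N)
    for (i, j), v in clues.items():
        lower[idx(i, j)] = upper[idx(i, j)] = v      # fix the given squares

    res = milp(c=np.zeros(N * N),
               constraints=LinearConstraint(np.array(rows_A), lb, ub),
               integrality=np.ones(N * N),
               bounds=Bounds(lower, upper))
    return res.x.reshape(N, N).round().astype(int) if res.success else None
```

The YALMIP model in the post is structurally the same: binary variables, equality constraints on row/column sums, and the 1-to-2 bounds on each window of three cells.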

Read more…

Simple optimization problem regarding distances

March 27, 2014 Leave a comment

Given {n} points {A_1,...,A_n} in the plane we can study the problem

\displaystyle \min_M MA_1+...+MA_n.

For {n=3} the solution is known: for a triangle whose angles are all less than {120^\circ} it is the Torricelli (or Fermat) point, which lies at the intersection of the circumcircles of the equilateral triangles built outside the triangle on its sides.

For {n=4}, if we have a convex quadrilateral, then a simple triangle inequality argument proves that the optimal position for {M} is the intersection of the diagonals.

In general, we cannot describe the position of the solution precisely. Still, providing a numerical algorithm which computes the optimal position of {M} is not a hard task, and I will do that at the end of this post.
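A classical algorithm for this problem is Weiszfeld's fixed-point iteration, which repeatedly replaces {M} by the average of the {A_i} weighted by {1/MA_i}. The sketch below is mine and not necessarily the algorithm from the post.

```python
import numpy as np

def geometric_median(pts, tol=1e-10, max_iter=1000):
    # Weiszfeld's iteration for min_M  MA_1 + ... + MA_n
    M = pts.mean(axis=0)                      # start from the centroid
    for _ in range(max_iter):
        d = np.linalg.norm(pts - M, axis=1)
        if np.any(d < 1e-12):                 # M landed on a data point
            return M
        w = 1.0 / d                           # weights 1 / MA_i
        M_new = (pts * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(M_new - M) < tol:
            return M_new
        M = M_new
    return M
```

For the vertices of a convex quadrilateral the iteration converges to the intersection of the diagonals, matching the triangle inequality argument above.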

 

Read more…

Numerical method – minimizing eigenvalues on polygons

December 23, 2013 1 comment

I will present here an algorithm to find numerically the polygon {P_n} with {n} sides which minimizes the {k}-th eigenvalue of the Dirichlet Laplacian with a volume constraint.

The first question is: how do we calculate the eigenvalues of a polygon? I adapted a variant of the method of fundamental solutions (see the general 2D case here) for the polygonal case. The method of fundamental solutions consists in choosing functions which already satisfy the equation {-\Delta u=\lambda u} on the whole plane, and imposing that a combination of them vanishes on the boundary of the desired shape. We choose the fundamental solutions as the radial functions {\Phi_n^\omega(x)=\frac{i}{4}H_0(\omega|x-y_n|)}, where {y_1,...,y_m} are some well chosen source points and {\omega^2=\lambda}. We search for our solution as a linear combination of the functions {\Phi_n}, so we have to solve a system of the form

\displaystyle \sum \alpha_i \Phi_i^\omega(x) =0 , \text{ on }\partial \Omega

in order to find the desired eigenfunction. Since we cannot numerically impose this condition at every {x \in \partial \Omega}, we choose a discretization {(x_j)_{j=1..m}} of the boundary of {\Omega} and arrive at a system of equations of the form

\displaystyle \sum \alpha_i \Phi_i^\omega(x_j) = 0

and this system has a nontrivial solution if and only if the matrix {A^\omega = (\Phi_i^\omega(x_j))} is singular. The values {\omega} for which {A^\omega} is singular are exactly the square roots of the eigenvalues of our domain {\Omega}.
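For the unit disk, where the Dirichlet eigenvalues are known exactly (the first is {j_{0,1}^2 \approx 5.783}, with {j_{0,1}} the first zero of the Bessel function {J_0}), the dip of the smallest singular value of {A^\omega} can be observed directly. A small Python sketch; the number of sources, their radius and the scanning window are choices of mine:

```python
import numpy as np
from scipy.special import hankel1

def smallest_singular_value(w, m=16, R=1.5):
    # boundary points x_j on the unit circle, source points y_n on radius R
    t = np.linspace(0, 2 * np.pi, m, endpoint=False)
    x = np.column_stack((np.cos(t), np.sin(t)))
    y = R * np.column_stack((np.cos(t), np.sin(t)))
    dist = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
    A = (1j / 4) * hankel1(0, w * dist)   # fundamental solutions at the x_j
    return np.linalg.svd(A, compute_uv=False)[-1]

# scan omega: dips of the smallest singular value mark sqrt(eigenvalues);
# the first Dirichlet eigenfrequency of the unit disk is j_{0,1} ~ 2.4048
ws = np.linspace(2.2, 2.6, 401)
w_star = ws[np.argmin([smallest_singular_value(w) for w in ws])]
```

A caveat worth keeping in mind: for many sources the matrix {A^\omega} becomes severely ill-conditioned, so its smallest singular value is tiny even away from eigenvalues; a modest number of sources keeps the dips visible in this sketch.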

Read more…

Distance from a point to an ellipsoid

October 17, 2013 5 comments

Here I will discuss the problem of finding the point which realizes the minimal distance from a given point {x_0} to an ellipsoid. First I will look at the theoretical properties which characterize this nearest point, and then I will present a numerical algorithm to find it.

This problem is interesting in itself, but it also has other applications. Sometimes in optimization problems we need to impose a quadratic constraint, and after each gradient descent step we project the result back onto the constraint set, which is an ellipsoid.

Let’s begin with the problem, which can be formulated in general as follows: we are given {y \in \Bbb{R}^n} and a positive definite matrix {A}, which defines an ellipsoid {x^TAx=1}. Without loss of generality assume that {A} is diagonal (if not, we diagonalize it using an orthogonal matrix and work in the new coordinate system). The problem becomes

\displaystyle \min_{x^TAx=1} \|x-y\|^2.

(note that the square is not important here, but simplifies the computations later)
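For a diagonal {A} the Lagrange condition {x - y + \lambda Ax = 0} gives {x_i = y_i/(1+\lambda a_i)}, so {\lambda} is the root of a scalar equation which can be bracketed and solved numerically. Here is a Python sketch for an exterior point {y}; the root-finding strategy is mine, and the post may proceed differently.

```python
import numpy as np
from scipy.optimize import brentq

def project_to_ellipsoid(a, y):
    """Nearest point on {x : sum a_i x_i^2 = 1} to y (diagonal A, y outside).
    From the Lagrange condition, x = (I + lam*A)^{-1} y for some lam >= 0."""
    a, y = np.asarray(a, float), np.asarray(y, float)
    # lam solves g(lam) = 0, where g is the constraint residual
    g = lambda lam: np.sum(a * y**2 / (1 + lam * a)**2) - 1
    # for an exterior point g(0) > 0 and g decreases to -1, so a root exists
    hi = 1.0
    while g(hi) > 0:
        hi *= 2
    lam = brentq(g, 0, hi, xtol=1e-14)
    return y / (1 + lam * a)
```

For instance, with {A = \mathrm{diag}(1,4)} and {y=(0,3)} the projection is {(0, 1/2)}, which is easy to verify by hand.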

Read more…
