Partitioning a disk with disks

February 7, 2021

Suppose that disjoint disks {D_i} with radii {r_i} are placed inside the unit disk {D} in such a way that {\sum_{i=1}^\infty r_i^2=1}. Prove that the series {\sum_{i=1}^\infty r_i} diverges.

Details about this problem and the motivation behind it can be found in the article An infinite packing theorem for spheres by Oscar Welser, Proc. AMS, Vol. 11, No. 2 (Apr., 1960).

Another way of formulating this problem is the following: a sequence of disks {D_n} which cover {D} exactly (up to a zero-measure set) must have infinite total perimeter.


A rational harmonic series

January 9, 2021

Can the rational numbers in the interval {[0,1]} be enumerated as a sequence {(q_n)_{n\geq 1}} in such a way that the series {\sum_{n=1}^\infty q_n/n} is convergent?

Source: 17th University of Michigan Undergraduate Mathematics Competition, 2000. You can also take a look at Bicycle or Unicycle?, Problem 63.


Ten bottles of wine – Python code

January 4, 2021

In a previous post I presented a nice geometry problem stating that if bottles are stacked in a box such that the extremities of the bottom layer touch the sides of the box and each bottle in the rows above touches two bottles from the row below, then the top bottle is centered in the box.

Below you will find Python code which allows you to produce pictures of stacks of bottles in a box for an arbitrary number of rows (you may need to modify the code slightly for more than 15 rows).

The code for drawing the bottles goes like this:

import numpy as np
import matplotlib.pyplot as plt

N = 15          # number of rows
r = 1           # radius

# skips between bottom bottles (an array, so that *r scales the values)
skips = np.array([0,1.6,0.4,1.2,0.3,0.6,0.7,0.8,0.9,1,0.4,0.4,0.6,0.7,0.9])*r
skips = skips[0:N]
w = N*2*r+np.sum(skips)    # total width

skips = np.cumsum(skips)   # compute cumulative skips

plotw = 0.05*w             # space for plot borders

def drawing(N,r,skips):
    fig, ax = plt.subplots()
    lin = np.arange(N)
    xs = 2*r*lin+skips+r    # x coordinates of the bottom bottles
    ys = np.zeros(N)
    for i in range(0,N):
        # draw the bottom bottles
        ax.add_patch(plt.Circle((xs[i],ys[i]),r,color='b',fill=False))
    # compute the position of the bottles from
    # the second row up
    for i in range(1,N):
        newxs = np.zeros(N-i)
        newys = np.zeros(N-i)
        for j in range(0,N-i):
            seg = [xs[j+1]-xs[j],ys[j+1]-ys[j]]
            l   = np.linalg.norm(seg)/2
            nor = np.array([-seg[1],seg[0]])
            nor = nor/np.linalg.norm(nor)
            mid = 0.5*np.array([xs[j+1]+xs[j],ys[j+1]+ys[j]])
            nl  = np.sqrt(4*r**2-l**2)  # distance from the midpoint to the new center
            newxs[j] = mid[0]+nl*nor[0]
            newys[j] = mid[1]+nl*nor[1]
            # draw the circles of the new row
            ax.add_patch(plt.Circle((newxs[j],newys[j]),r,color='b',fill=False))
        xs = newxs
        ys = newys
    print("Last x coordinate: ",newxs[0]," Midpoint: ",w/2)
    # draw the box: bottom, left and right sides
    ax.plot([0,w],[-r,-r],color='k')
    ax.plot([0,0],[-r,newys[0]+r],color='k')
    ax.plot([w,w],[-r,newys[0]+r],color='k')
    # some plot settings
    ax.set_aspect('equal')
    ax.axis('off')
    ax.set_xlim(-plotw,w+plotw)
    plt.savefig("Bottles.png",bbox_inches='tight',pad_inches=-0.1,dpi=300)

drawing(N,r,skips)


Here is an output obtained using this code:

15 rows of bottles: the top one is still centered

The code for drawing the segments between the centers, which illustrate the rhombic pattern and indicate why the top bottle should be centered, goes like this:

def drawingRhombi(N,r,skips):
    fig, ax = plt.subplots()
    lin = np.arange(N)
    xs = 2*r*lin+skips+r
    ys = np.zeros(N)
    for i in range(1,N):
        newxs = np.zeros(N-i)
        newys = np.zeros(N-i)
        for j in range(0,N-i):
            seg = [xs[j+1]-xs[j],ys[j+1]-ys[j]]
            l   = np.linalg.norm(seg)/2
            nor = np.array([-seg[1],seg[0]])
            nor = nor/np.linalg.norm(nor)
            mid = 0.5*np.array([xs[j+1]+xs[j],ys[j+1]+ys[j]])
            nl  = np.sqrt(4*r**2-l**2)
            newxs[j] = mid[0]+nl*nor[0]
            newys[j] = mid[1]+nl*nor[1]
            # draw the segments joining the new center
            # to the two centers below it
            ax.plot([xs[j],newxs[j]],[ys[j],newys[j]],color='b')
            ax.plot([xs[j+1],newxs[j]],[ys[j+1],newys[j]],color='b')
        xs = newxs
        ys = newys
    print("Last x coordinate: ",newxs[0]," Midpoint: ",w/2)
    ax.set_aspect('equal')
    ax.axis('off')
    plt.savefig("Rhombi.png",bbox_inches='tight',pad_inches=-0.1,dpi=300)

drawingRhombi(N,r,skips)


Here is an output:

The rhombic pattern with symmetry: this shows that the top bottle is exactly centered in the box.

Ten Bottles of Wine

December 17, 2020

Ten identical bottles of wine are stacked up in a bin as shown in the Figure below. Show that the top bottle is centered between the sides of the bin.

Bicycle or Unicycle?, D. Velleman, S. Wagon, Problem 3

One of the nice benefits of being part of Mathematical Reviews is being able to use some points to buy books published by the American Mathematical Society. The book referenced above is very nice and contains many puzzle-style problems with detailed solutions and nice pictures. The problem at hand is surely intriguing, since the hypotheses are very simple and the conclusion is beautiful.

I am not going to try and write a complete solution here. I will just give you a hint and show you a picture which should help you get to the solution yourself. I wrote a python code which can show a variety of configurations, notably, it can generate such pictures for more than four layers of bottles. The conclusion is still the same: the top bottle is centered, and the proof can be obtained following the same lines.

And here is an example for 10 layers of bottles!

Categories: Geometry, puzzles

Continuous functions on the 2-sphere

December 3, 2020

The structure of a real-valued continuous function defined on the circle is quite intuitive. No one is surprised by the following statement:

A continuous function {f : \Bbb{S}^1 \rightarrow \Bbb{R}} satisfies the following property: there exists {\theta \in \Bbb{S}^1} such that {f(\theta)=f(-\theta)}.

For a proof, simply consider the function {g(\theta) = f(\theta)-f(-\theta)} and notice that {g(-\theta)=-g(\theta)}. Therefore, going from {-\theta} to {\theta} along the circle, the continuous function {g} must pass through zero, which implies that {f} takes the same value at two antipodal points. The following theorems involving the {2}-dimensional sphere are less intuitive, and it is not immediate how to prove them. Nevertheless, they have some cool applications.
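The one-dimensional argument above is effectively constructive: bisection on {g} locates a pair of antipodal points numerically. A minimal sketch (the sample function {f} is made up for illustration):

```python
import math

def antipodal_point(f, tol=1e-12):
    # g(t) = f(t) - f(t + pi) satisfies g(t + pi) = -g(t), so it
    # changes sign on [0, pi]; bisection then locates a zero of g
    g = lambda t: f(t) - f(t + math.pi)
    a, b = 0.0, math.pi
    if g(a) == 0.0:
        return a
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# a sample continuous function on the circle (angle in radians)
f = lambda t: math.sin(t) + 0.3 * math.cos(t) + 0.2 * math.sin(2 * t)
t0 = antipodal_point(f)
print(abs(f(t0) - f(t0 + math.pi)))   # essentially zero
```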

Theorem 1. (Borsuk-Ulam) Let {f: \Bbb{S}^2\rightarrow \Bbb{R}^2} be a continuous function. Then there exists {x \in \Bbb{S}^2} such that {f(x)=f(-x)}.

Note that the proof strategy from the one-dimensional case does not work. The two coordinates of {f(x)-f(-x)} should vanish simultaneously, and it is not straightforward to see why this should happen. Two other similar results are shown below.

Theorem 2. (Kakutani) Let {f: \Bbb{S}^2\rightarrow \Bbb{R}} be a continuous function. Then there exist three points {x_1,x_2,x_3} such that {Ox_1,Ox_2,Ox_3} are mutually orthogonal and {f(x_1)=f(x_2)=f(x_3)}.

Theorem 3. (Dyson) Let {f: \Bbb{S}^2\rightarrow \Bbb{R}} be a continuous function. Then there exist {x_1,x_2,x_3,x_4} vertices of a square centered at the origin such that {f(x_1)=f(x_2)=f(x_3)=f(x_4)}.

We will not try to prove any of these results, since the techniques involved are not always elementary. However, let's look at a short list of cool consequences.

Consequence 1. At any moment in time there exists a place on Earth where the temperature and the atmospheric pressure are the same as at the antipodal point. It is enough to apply the Borsuk-Ulam theorem to the function which associates to each point the temperature and the pressure, assuming they vary continuously.

Consequence 2. Let {K} be a convex three dimensional body. Then there exists a cube circumscribed about {K} (i.e. every face touches {K}).

For each direction {x \in \Bbb{S}^2} associate the distance between the two tangent planes to {K} orthogonal to {x}. This defines a continuous function on the sphere. Applying Kakutani's result, we find three mutually orthogonal directions for which the distances between the corresponding pairs of tangent planes are the same. This gives the desired cube circumscribed about {K}. It is possible to generalize this to an arbitrary parallelepiped by using other variants of Kakutani's result.

Consequence 3. Let {K} be a three dimensional body which is star-shaped with respect to the origin. Then there exist four points on its boundary which are vertices of a square.

In order to prove this, consider the function {f} which measures the distance from the origin to the boundary of {K} in the direction {x \in \Bbb{S}^2}. This is a continuous function, and Dyson's theorem gives four points, vertices of a square centered at the origin, on which this function takes the same value. The corresponding four boundary points are the vertices of the desired square.

Shapes with minimal area circumscribed around all bodies of constant width

November 26, 2020

In a previous post I talked about a proof of the Blaschke-Lebesgue theorem (that the Reuleaux triangle is the two dimensional constant width shape with the minimal area). The proof was quite surprising, as it gave a lower bound that was exactly attained for the Reuleaux triangle.

Looking closely at the proof one can deduce the following interesting result:

Among all convex, centrally symmetric domains containing all shapes of a given constant width {w}, the regular hexagon of width {w} has the smallest area.

To prove this result, it is enough to recall the definition of the mixed area {A(K_1,K_2)} of two convex shapes in the plane, given by

\displaystyle A(K_1+K_2) = A(K_1)+2A(K_1,K_2)+A(K_2).

Note that {A(K_1,K_2)} is monotone in each one of its arguments.

Then, given a constant width shape {B} of width {1}, the shape {B+(-B)} is the unit disk, of area {\pi}. Moreover, there exists a regular hexagon {H} of width one circumscribed about {B}. This gives

\displaystyle \pi = A(B+(-B)) = 2A(B)+2A(B,-B) \leq 2A(B)+2A(H).

This gives {A(B) \geq \pi/2-A(H)}, and equality is attained for the Reuleaux triangle.

Suppose now that there exists another shape {H'} which contains all shapes of constant width equal to {1} and {A(H')<A(H)}. Repeating the above argument we would also get

\displaystyle A(B) \geq \pi/2-A(H')>\pi/2-A(H).

However, plugging the Reuleaux triangle in place of {B} in the above inequality gives a contradiction.

Therefore, among all shapes containing copies of all constant width domains in dimension two, the regular hexagon has the minimal area.
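As a quick sanity check of the equality case, the regular hexagon of width {1} has area {\sqrt{3}/2}, and {\pi/2-\sqrt{3}/2} is exactly the area {(\pi-\sqrt{3})/2} of the Reuleaux triangle of width {1}. A two-line verification in Python:

```python
import math

w = 1.0
# regular hexagon of width w: side w/sqrt(3), area (3*sqrt(3)/2)*side^2
hex_area = (3 * math.sqrt(3) / 2) * (w / math.sqrt(3)) ** 2
# Reuleaux triangle of width w has area (pi - sqrt(3))/2 * w^2
reuleaux_area = (math.pi - math.sqrt(3)) / 2 * w ** 2

print(hex_area)                 # sqrt(3)/2, approximately 0.8660
print(math.pi / 2 - hex_area)   # the lower bound from the proof
print(reuleaux_area)            # coincides with the lower bound
```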

The paper Sets of constant width by G.D. Chakerian gives more details about the proof shown above, as well as some ideas about the three dimensional case, which is still an open problem!

Minimal squared sum minus sum of squares

October 10, 2020

a) Let {x_1,...,x_n} be real numbers such that {\sum_{i=1}^n |x_i|\leq 1}. Then

\displaystyle \left(\sum_{i=1}^n x_i\right)^2-\sum_{i=1}^n x_i^2 \geq -\frac{1}{2}.

b) Prove that if {\sum_{i=1}^n x_i^2 = \sum_{i=1}^n y_i^2 = 1} then

\displaystyle \left(\sum_{i=1}^n x_iy_i\right)^2-\sum_{i=1}^n x_i^2y_i^2 \geq -\frac{1}{2}.

Solution: a) Denote by {s_+} and {s_-} the sums of the positive and negative terms among the {x_i}:

\displaystyle s_+ = \sum_{x_i\geq 0} x_i,\ s_- = -\sum_{x_i<0} x_i.

It is obvious that {\sum_{i=1}^n x_i^2 \leq s_+^2+s_-^2} (it is enough to expand the squares). Moreover, equality is attained here only if each of the two sums contains at most one non-zero term.

Therefore, if {(s_+-s_-)^2-s_+^2-s_-^2\geq -\frac{1}{2}}, the original inequality would follow. Since the left hand side equals {-2s_+s_-}, this is equivalent to {s_+s_- \leq \frac{1}{4}}. But {s_++s_- = \sum_{i=1}^n |x_i|\leq 1}, which implies {s_+s_-\leq \frac{1}{4}} by the AM-GM inequality.

Let us now discuss the equality case. Equality holds if and only if {s_+s_-=1/4}. Since {1 \geq (s_++s_-)^2 \geq 4s_+s_-=1} it follows that {s_+=s_-=1/2}. Moreover, equality should also hold in

\displaystyle \sum_{i=1}^n x_i^2 = s_+^2+s_-^2.

This means that at most one term in each of {s_+} and {s_-} is non-zero, which shows that the equality case is attained for {(1/2,-1/2,0,0,...,0)}.

b) It is enough to apply a) to the numbers {x_iy_i}, noting that by the Cauchy-Schwarz inequality

\displaystyle \left(\sum_{i=1}^n |x_iy_i|\right)^2 \leq \sum_{i=1}^n x_i^2\sum_{i=1}^n y_i^2=1.
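Part a) is easy to stress-test numerically. A small pure-Python sketch that checks the inequality on random admissible vectors and evaluates the equality case (function name is mine):

```python
import random

def lhs(x):
    # (sum x_i)^2 - sum x_i^2
    s = sum(x)
    return s * s - sum(t * t for t in x)

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 8)
    x = [random.uniform(-1, 1) for _ in range(n)]
    total = sum(abs(t) for t in x)
    if total > 1:
        x = [t / total for t in x]   # enforce sum |x_i| <= 1
    assert lhs(x) >= -0.5 - 1e-12

print(lhs([0.5, -0.5, 0, 0]))   # the equality case: -0.5
```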

Source: the solution here was inspired by the following MathOverflow answer: link.

Categories: Algebra, Inequalities

Visualizing isosurfaces – Python and Mayavi

October 7, 2020

Recently I needed to plot some isosurfaces on 3D unstructured tetrahedral meshes. I had the tetrahedral mesh of a 3D set and a P1 finite element function defined through its values at the nodes of this mesh. The objective was to visualize an isosurface of this function.

Although this is quite a standard task, it was difficult for me to find the proper solution. Matplotlib did not allow me to interact with the 3D plots, so I decided to look into Mayavi. There were a lot of answers on the web concerning isosurfaces on structured meshes (obtained with meshgrids, like in Matlab), but that was not what I wanted. There were even some answers where the data from the unstructured mesh was interpolated on a structured mesh and the isosurface was computed from that. This is surely unnecessary: since my mesh is made of tetrahedra, it is easy to see where the isosurface lies; just look for tetrahedra in which the values at the vertices lie on both sides of the isosurface value. Fortunately, I did not give up and managed to find how to do this in Mayavi. I found an example in some obscure place of the online documentation which did exactly that, and I present the code below.

I also had a mesh in the .mesh format (output from FreeFEM) which I needed to import into python. For this I used the “meshio” package.

import meshio
import numpy as np

from mayavi import mlab
from tvtk.api import tvtk

# read the FreeFEM mesh with meshio
mesh ="CurrentMesh3D.mesh")

mdet = mesh.cells
tetra = mdet[0].data           # tetrahedra
tri   = mdet[1].data           # boundary triangles

pts = mesh.points
x = pts[:,0]
y = pts[:,1]
z = pts[:,2]

v = np.loadtxt("")             # values of the P1 function at the nodes

fig = mlab.figure(bgcolor=(1,1,1),size=(400, 350))

tet_type = tvtk.Tetra().cell_type

# build the unstructured grid and attach the nodal values
ug = tvtk.UnstructuredGrid(points=pts)
ug.point_data.scalars = v = "value"
ug.set_cells(tet_type, tetra)
ds = mlab.pipeline.add_dataset(ug)

# the isosurface of the P1 function at the value 0.5
iso = mlab.pipeline.iso_surface(ds,contours=[0.5,]) = 0.7
iso.contour.number_of_contours = 1

# plot the boundary mesh around the isosurface
l = mlab.triangular_mesh(x,y,z,tri,color=(0.3,0.3,0.3),
                         opacity=0.2)  # the original call was truncated here;
                                       # the opacity value is a guess

As can be seen in the code above, the isosurface is set with the iso_surface command from mlab.pipeline. Also, it is possible to plot the triangular mesh directly with mlab.triangular_mesh. One result obtained with this code is plotted below.

Isosurface and the surrounding mesh

Of course, the above example can be adapted to any unstructured mesh you already have in Python, provided you have the tetrahedral connectivity and the values at the vertices. You will also need the triangles making up the boundary of the mesh in order to plot it. You can change the value of the isosurface, the number of isosurfaces, etc., by modifying the appropriate parts of the code.

What I found frustrating is that even though this kind of solution is already implemented, it is not easy to identify in the Mayavi documentation.

Of course, it is also possible to do all of this in ParaView, but I did not manage to find enough examples of ParaView scripts to learn how to do it. And doing it by hand in ParaView was no fun, since I had to produce many such pictures, and setting them all up to look the same by hand was just too much. (This is again an example of things that are out there but only available to those who already know them... Scripting in ParaView seems powerful, but if I don't see a large palette of examples, I'm unable to learn how to use it.)

So hopefully, you'll find this reference useful if you want to plot an isosurface of a P1 function on an unstructured mesh in Mayavi!

Challenge: Try to find the place in the Mayavi documentation where a similar example is shown… I did not manage to find the link to put it here even though I saw it two days ago. Search engines did not help. Searching directly will give you examples on meshgrids. I hope I’m wrong and you find it quickly: if you do, please post the link in the comment section! 🙂

IMO 2020 – Problem 2

September 30, 2020

Problem 2. The real numbers {a,b,c,d} are such that {a \geq b \geq c \geq d>0} and {a+b+c+d=1}. Prove that

\displaystyle (a+2b+3c+4d)a^ab^bc^cd^d<1.

Hints: This problem is not difficult if one knows a bit of real analysis. The function {x \mapsto x^x} is convex and can be extended continuously to {[0,1]} by noting that {\lim_{x \rightarrow 0} x^x=1}. Therefore, the function {(a,b,c,d) \mapsto (a+2b+3c+4d)a^ab^bc^cd^d} can be extended continuously to {[0,1]^4}. Moreover, the constraint {a+b+c+d=1} defines a compact set inside {[0,1]^4}, so the extended function attains its minimum and maximum on this domain. If the maximum is attained when some of the variables equal {0}, then, since the hypothesis requires {a,b,c,d>0}, the bound in the conclusion is approached but never attained, and the inequality is strict.

Having no better idea than to go ahead with the analysis, a natural step is to see what happens when perturbing two of the variables while maintaining the constraint. First note that the case {a=b=c=d=1/4} gives the value {10/16}, which is obviously less than {1}. Taking {b,c,d \rightarrow 0} gives values as close to {1} as we want, so the maximum is surely not attained when all variables are equal. Therefore we may assume that at the maximum there exist two variables, say {a} and {b}, such that {a>b}.

Then, change the variables {a} and {b} to {a-t,b+t}, with {t>0}, which preserves the constraint. Consider the function

\displaystyle f: t \mapsto (a+2b+3c+4d+t)(a-t)^{(a-t)}(b+t)^{(b+t)}c^cd^d.

The derivative of this function is

\displaystyle (a-t)^{(a-t)}(b+t)^{(b+t)}c^cd^d[1+(a+2b+3c+4d+t)[\ln(b+t)-\ln(a-t)]].

It is not difficult to see that the expression above is increasing with respect to {t}. Therefore, the function {f} is convex, and a convex function attains its maximum at the endpoints of the interval of admissible {t}, i.e., when {a-t} and {b+t} are as far apart as possible. Therefore, {(a,b,c,d)} is not a maximum of the above function when {a<1}. A similar argument works when perturbing other pairs of variables. Therefore the maximal value is attained on the closure of the domain at {a=1} and {b=c=d=0}, and this maximal value is {1}. Since the hypothesis clearly states {b,c,d>0}, it follows that the upper bound is indeed {1}, but it is not attained.
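A quick numerical experiment supports this analysis: sampling random admissible quadruples, the expression stays below {1} and equals {10/16} at the symmetric point (function names below are mine):

```python
import random

def F(a, b, c, d):
    # x**x extended by continuity with the value 1 at x = 0
    p = lambda x: x ** x if x > 0 else 1.0
    return (a + 2*b + 3*c + 4*d) * p(a) * p(b) * p(c) * p(d)

random.seed(1)
worst = 0.0
for _ in range(100000):
    vals = sorted(random.random() for _ in range(4))
    s = sum(vals)
    d, c, b, a = (v / s for v in vals)   # a >= b >= c >= d, sum = 1
    worst = max(worst, F(a, b, c, d))

print(worst)   # stays below 1
```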

IMO 2020 – Problem 1

September 24, 2020

Problem 1. Consider the convex quadrilateral {ABCD}. The point {P} is in the interior of {ABCD}. The following ratio equalities hold:

\displaystyle \angle PAD : \angle PBA : \angle DPA = 1:2:3 = \angle CBP : \angle BAP : \angle BPC.

Prove that the following three lines meet in a point: the internal bisectors of angles {\angle ADP} and {\angle PCB} and the perpendicular bisector of segment {AB}.

Solution: The key to solving this problem is to understand that what is important is not the quadrilateral {ABCD}, but the triangle {PAB}. Note the interesting structure of the triangles {DPA} and {CPB}, which have one angle three times another. Note also that the angle {\angle DAP} is half the angle {\angle ABP}. Therefore, it is possible to build a cyclic quadrilateral here, by considering the point {X} on {AD} such that {\angle XPA = \angle DAP} (make a figure to understand the notations).

Why is this construction useful for our problem? If you look now in the triangle {XPD} you see that the angles {\angle X} and {\angle P} are equal. Therefore, the angle bisector of {\angle ADP} is the perpendicular bisector of {XP}, which goes through the circumcenter of the cyclic quadrilateral {AXPB}. Note that this is exactly the circumcenter of {ABP}.

Now, repeating the same argument in the triangle {BPC}, we can see that the two angle bisectors in the problem all go through the circumcenter of the triangle {APB}, which obviously lies on the perpendicular bisector of {AB}.

As a conclusion, what helped solve this problem was understanding the structure of the triangles with one angle three times the other, and transforming the angle bisector into something more useful, like a perpendicular bisector which goes through a circumcenter.

IMO 2020 Problems

September 23, 2020

Problem 1. Consider the convex quadrilateral {ABCD}. The point {P} is in the interior of {ABCD}. The following ratio equalities hold:

\displaystyle \angle PAD : \angle PBA : \angle DPA = 1:2:3 = \angle CBP : \angle BAP : \angle BPC.

Prove that the following three lines meet in a point: the internal bisectors of angles {\angle ADP} and {\angle PCB} and the perpendicular bisector of segment {AB}.

Problem 2. The real numbers {a,b,c,d} are such that {a \geq b \geq c \geq d>0} and {a+b+c+d=1}. Prove that

\displaystyle (a+2b+3c+4d)a^ab^bc^cd^d<1.

Problem 3. There are {4n} pebbles of weights {1,2,3,...,4n}. Each pebble is coloured in one of {n} colours and there are four pebbles of each colour. Show that we can arrange the pebbles into two piles so that the following two conditions are both satisfied:

  • The total weights of both piles are the same.
  • Each pile contains two pebbles of each colour.

Problem 4. There is an integer {n>1}. There are {n^2} stations on a slope of a mountain, all at different altitudes. Each of two cable car companies, {A} and {B}, operates {k} cable cars; each cable car provides a transfer from one of the stations to a higher one (with no intermediate stops). The {k} cable cars of {A} have {k} different starting points and {k} different finishing points, and a cable car which starts higher also finishes higher. The same conditions hold for {B}. We say that two stations are linked by a company if one can start from the lower station and reach the higher one by using one or more cars of that company (no other movements between stations are allowed). Determine the smallest positive integer {k} for which one can guarantee that there are two stations that are linked by both companies.

Problem 5. A deck of {n>1} cards is given. A positive integer is written on each card. The deck has the property that the arithmetic mean of the numbers on each pair of cards is also the geometric mean of the numbers on some collection of one or more cards.

For which {n} does it follow that the numbers on all cards are all equal? 

Problem 6. Prove that there exists a positive constant {c} such that the following statement is true:

Consider an integer {n>1}, and a set {\mathcal S} of {n} points in the plane such that the distance between any two different points in {\mathcal S} is at least {1}. It follows that there is a line {\ell} separating {\mathcal S} such that the distance from any point of {\mathcal S} to {\ell} is at least {c n^{-1/3}}.

(A line {\ell} separates a set of points {\mathcal S} if some segment joining two points in {\mathcal S} crosses {\ell}.) 

Note. Weaker results with {cn^{-1/3}} replaced by {cn^{-\alpha}} may be awarded points depending on the value of the constant {\alpha>1/3}.


IMC 2020 – Day 1 – Hints for Problem 4

September 15, 2020

Problem 4. A polynomial {p} with real coefficients satisfies the equation {p(x+1)-p(x) = x^{100}} for all {x \in \Bbb{R}}. Prove that {p(1-t)\geq p(t)} for {0\leq t\leq 1/2}.

(proposed by Daniil Klyuev, St. Petersburg State University)

Solution: This problem is related to the Bernoulli polynomials, defined by {B_0=1} and

\displaystyle B_n(X+1)-B_n(X) = nX^{n-1} \text{ with } \int_0^1 B_n = 0.

It can be seen that the polynomial {p} in the problem verifies the same relation as {B_{101}} (up to the factor {101}), and the integral condition only affects the free term. Note that the condition {p(1-t)\geq p(t)} is invariant under adding a constant and under multiplication by a positive constant. Therefore, it is enough to prove that {B_{101}} verifies the required inequality. In order to solve the problem, try to tackle the following ideas regarding Bernoulli polynomials.

1. Prove that they are uniquely defined by the above condition.

2. Compute the first few polynomials (or search the web for a list). Find the dominating coefficient of {B_n}.

3. Show that {B_n(0)=B_n(1)} for {n \geq 2}.

4. Consider {C_n = B_{n+1}'/(n+1)} and compute {C_n(X+1)-C_n(X)}. Deduce that {B_n' = nB_{n-1}}.

5. Prove that {B_n(X) = (-1)^n B_n(1-X)}. Deduce that for {k \geq 1} we have

\displaystyle B_{2k+1}(0) = B_{2k+1}(1/2) = B_{2k+1}(1) = 0

6. Use an inductive argument and points 3, 4, 5 above to see what the graphs of {B_n} look like on {[0,1]}. This should get you the result. 

Bonus questions: 

7. Show that {B_n(X) = 2^{n-1}\left(B_n(X/2)+B_n((X+1)/2)\right)}.

8. Show that for {k\geq 1} the polynomial {B_{2k-1}} does not vanish on {(0,1/2)} and that {B_{2k}} vanishes exactly once. (this should be clear when investigating the variations of {B_n}, and noting that each of these polynomials, when differentiated, has the same sign as the previous one).
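If you want to experiment, the polynomials {B_n} can be generated directly from the properties used in points 1 and 4 ({B_n'=nB_{n-1}} together with {\int_0^1 B_n=0}), using exact rational arithmetic. A small sketch (helper names are mine; coefficients are stored from the constant term up):

```python
from fractions import Fraction

def bernoulli_polys(nmax):
    """Return [B_0, ..., B_nmax] as coefficient lists c[k] of X^k,
    built from B_n' = n*B_{n-1} and the normalization int_0^1 B_n = 0."""
    polys = [[Fraction(1)]]
    for n in range(1, nmax + 1):
        prev = polys[-1]
        # antiderivative of n*B_{n-1}: coefficient of X^(k+1) is n*prev[k]/(k+1)
        coeffs = [Fraction(0)] + [Fraction(n) * c / (k + 1) for k, c in enumerate(prev)]
        # choose the constant term so that the integral over [0,1] vanishes
        coeffs[0] = -sum(c / (k + 1) for k, c in enumerate(coeffs))
        polys.append(coeffs)
    return polys

def evalp(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

B = bernoulli_polys(7)
print(B[1])   # B_1 = X - 1/2, stored constant term first
# point 5: odd-index polynomials vanish at 0, 1/2 and 1
for n in (3, 5, 7):
    print([evalp(B[n], Fraction(t, 2)) for t in (0, 1, 2)])   # all zeros
```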

I must say that this problem seems a bit artificial, and people who have seen Bernoulli polynomials at least once in their life should have no problem solving it. However, I find that if the right observations are made and the right questions are asked, the path to the solution can be illuminated:

  • Is the polynomial {p} unique? If not, what is the kernel of the map {P \mapsto P(X+1)-P(X)}? The inequality to be proved should be invariant when adding a polynomial in the kernel, and this turns out to be true.
  • The fact that {p(x+1)-p(x)} is equal to {x^{100}} is a bit odd. The first question should be what happens when we replace this with {p(x+1)-p(x)=x^k} for some {k \geq 0}?
  • Start with {k} small, see if there is a pattern and then use an inductive argument.
Categories: Analysis

IMC 2020 Day 1 – Some Hints for Problems 1-2

August 31, 2020

Problem 1. Let {n} be a positive integer. Compute the number of words {w} (finite sequences of letters) that satisfy all the following requirements: (1) {w} consists of {n} letters, all of them from the alphabet {\{a,b,c,d\}} (2) {w} contains an even number of letters {a} (3) {w} contains an even number of letters {b} (For example, for {n=2} there are {6} such words: {aa, bb, cc,dd,cd} and {dc}.)

(proposed by Armend Sh. Shabani, University of Prishtina)

Hint: In order to get a formula for the total number of words, note that an even number {2k} of a's can be placed in {n\choose 2k} ways among the {n} positions, and an even number {2l} of b's can then be placed in {n-2k\choose 2l} ways among the remaining positions. Once the a's and b's are placed, the rest can be filled with c's and d's in {2^{n-2k-2l}} ways. This gives

\displaystyle \sum_{k=0}^{n/2} \sum_{l=0}^{n/2-k}{n\choose 2k}{n-2k\choose 2l}2^{n-2k-2l}

Next, use some tricks regarding the expansions of {(a+b)^n+(a-b)^n} to compress the sum above to {4^{n-1}+2^{n-1}}.
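The closed form is easy to check against brute force for small {n} (a quick sketch; the function name is mine):

```python
from itertools import product

def count_words(n):
    # brute force: words over {a,b,c,d} with an even number of a's and b's
    return sum(1 for w in product("abcd", repeat=n)
               if w.count("a") % 2 == 0 and w.count("b") % 2 == 0)

for n in range(1, 9):
    assert count_words(n) == 4 ** (n - 1) + 2 ** (n - 1)
print(count_words(2))   # 6, matching the example in the statement
```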

Problem 2. Let {A} and {B} be {n\times n} real matrices such that

\displaystyle \text{rank}(AB-BA+I) = 1,

where {I} is the {n\times n} identity matrix. Prove that

\displaystyle \text{tr}(ABAB)-\text{tr}(A^2B^2) = \frac{1}{2}n(n-1).

(proposed by Rustam Turdibaev, V.I. Romanovskiy Institute of Mathematics)

Hint: Every time you hear about a rank 1 matrix you should think of "column vector times row vector". Indeed, writing {AB-BA+I = cd^T} with {c,d} column vectors gives {AB-BA = cd^T-I}. Moreover, taking the trace in this equality, and using the fact that traces are invariant under circular permutations of products, you obtain {\text{tr}(cd^T) = d^Tc = n}.

Moreover, squaring {AB-BA = cd^T-I} gives

\displaystyle ABAB+BABA-ABBA-BAAB = (d^Tc)\,cd^T - 2cd^T+I.

Taking the trace above, and noting that {\text{tr}(BABA)=\text{tr}(ABAB)} and {\text{tr}(ABBA)=\text{tr}(BAAB)=\text{tr}(A^2B^2)}, the left hand side becomes {2\text{tr}(ABAB)-2\text{tr}(A^2B^2)}, while the right hand side is {n\cdot n-2n+n = n^2-n}. Dividing by {2} gives the desired result!
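The squaring identity {(cd^T-I)^2 = (d^Tc)\,cd^T-2cd^T+I} can be sanity-checked with random integer vectors. A minimal pure-Python sketch (matrices as lists of lists; helper names are mine):

```python
import random

def outer(c, d):  # the rank 1 matrix c d^T
    return [[ci * dj for dj in d] for ci in c]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B, s=1):  # A + s*B
    return [[A[i][j] + s * B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

random.seed(0)
n = 4
c = [random.randint(-5, 5) for _ in range(n)]
d = [random.randint(-5, 5) for _ in range(n)]

M = add(outer(c, d), identity(n), -1)        # M = c d^T - I
M2 = matmul(M, M)                            # (c d^T - I)^2
dtc = sum(di * ci for di, ci in zip(d, c))   # the scalar d^T c
R = add(add([[dtc * x for x in row] for row in outer(c, d)],
            outer(c, d), -2), identity(n))   # (d^T c) c d^T - 2 c d^T + I
print(M2 == R)   # True
```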

IMC 2020 Online – Problems – Day 1

August 28, 2020

Problem 1. Let {n} be a positive integer. Compute the number of words {w} (finite sequences of letters) that satisfy all the following requirements: (1) {w} consists of {n} letters, all of them from the alphabet {\{a,b,c,d\}} (2) {w} contains an even number of letters {a} (3) {w} contains an even number of letters {b} (For example, for {n=2} there are {6} such words: {aa, bb, cc,dd,cd} and {dc}.)

(proposed by Armend Sh. Shabani, University of Prishtina)

Problem 2. Let {A} and {B} be {n\times n} real matrices such that

\displaystyle \text{rank}(AB-BA+I) = 1,

where {I} is the {n\times n} identity matrix. Prove that

\displaystyle \text{tr}(ABAB)-\text{tr}(A^2B^2) = \frac{1}{2}n(n-1).

(proposed by Rustam Turdibaev, V.I. Romanovskiy Institute of Mathematics)

Problem 3. Let {d \geq 2} be an integer. Prove that there exists a constant {C(d)} such that the following holds: For any convex polytope {K\subset \Bbb{R}^d}, which is symmetric about the origin, and any {\varepsilon \in (0,1)}, there exists a convex polytope {L \subset \Bbb{R}^d} with at most {C(d) \varepsilon^{1-d}} vertices such that

\displaystyle (1-\varepsilon)K \subseteq L \subset K.

(For a real {\alpha}, a set {T\subset \Bbb{R}^d} with nonempty interior is a convex polytope with at most {\alpha} vertices if {T} is a convex hull of a set {X \subset \Bbb{R}^d} of at most {\alpha} points.)

(proposed by Fedor Petrov, St. Petersburg State University)

Problem 4. A polynomial {p} with real coefficients satisfies the equation {p(x+1)-p(x) = x^{100}} for all {x \in \Bbb{R}}. Prove that {p(1-t)\geq p(t)} for {0\leq t\leq 1/2}.

(proposed by Daniil Klyuev, St. Petersburg State University)


Construct Pythagorean triangles with given constraints

August 24, 2020

This post will present some ideas related to the generation of all Pythagorean triangles satisfying a certain criterion, from an algorithmic point of view. Of course, there are infinitely many such triangles (integer sided, with a right angle). Denoting by {b} and {c} the edges adjacent to the right angle and by {a} the hypotenuse, we have the classical relation

\displaystyle b^2+c^2=a^2.

When {a,b,c} are all integers, it is possible to give a more precise result regarding the possible values of {a,b,c}. Indeed, for any such triangle, there exist integers {k,m,n} with {m>n}, {m,n} not both odd and coprime such that

\displaystyle a = k(m^2+n^2), b = k(m^2-n^2), c = 2kmn.

These formulas are attributed to Euclid. The case {k=1} corresponds to {a,b,c} coprime, and such a triangle is called "primitive". What is important to note is that this formula generates ALL Pythagorean triangles exactly once, and this allows us to solve the following three questions:

1. Generate all Pythagorean triangles with edges {\leq N}.

2. Generate all Pythagorean triangles with {b = N} or {c=N}.

3. Generate all Pythagorean triangles with {a = N}.

Now let’s answer these questions one at a time:

1. In order to generate all Pythagorean triangles with edges {\leq N}, we need to loop over all {m,n} coprime, not both odd, with {m>n}, such that {m^2+n^2\leq N}, and for each primitive triangle add to the list all its "multiples" by looking at the appropriate {k}. One possible algorithm is

  • set an empty list {L}
  • loop for {m=2} to {\sqrt{N}}
    • loop for {n=1} to {m-1}, {\gcd(m,n)=1}, {m,n} not both odd
    • loop for {k=1} to {N/(m^2+n^2)} and add the triangle {[2kmn,k(m^2-n^2),k(m^2+n^2)]} to the list {L}
  • in the end {L} will contain the desired triangles
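A direct transcription of this algorithm in Python might look as follows (the function name is mine; triangles are returned as {(b,c,a)}):

```python
from math import gcd, isqrt

def pythagorean_triangles(N):
    """All Pythagorean triangles with edges <= N, as (b, c, a) with
    b = k(m^2 - n^2), c = 2kmn, a = k(m^2 + n^2)."""
    L = []
    for m in range(2, isqrt(N) + 1):
        for n in range(1, m):
            # keep only primitive generators: coprime, not both odd
            if gcd(m, n) != 1 or (m % 2 == 1 and n % 2 == 1):
                continue
            if m * m + n * n > N:
                break   # larger n only increases the hypotenuse
            for k in range(1, N // (m * m + n * n) + 1):
                L.append((k * (m * m - n * n), 2 * k * m * n, k * (m * m + n * n)))
    return L

T = pythagorean_triangles(25)
print(len(T))   # 8 triangles with edges <= 25
```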

2. In this question one of the two legs {b} or {c} should be equal to {N}. The only difficulty here is that we need to loop on the divisors of {N}. Indeed, we have two cases:

  • {N=2kmn}: for all divisors {k} of {N}, we should find the possible factorizations {N/(2k) = mn} with {m,n} not both odd and coprime, and then add the corresponding triangle to the list.
  • {N = k (m^2-n^2) = k(m+n)(m-n)}: find all factorizations {N/k = (m+n)(m-n)} and check again that {m,n} obtained are coprime and not both odd.
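The two cases above can be sketched as follows (the helper names are mine; in the second case note that {m+n} and {m-n} always have the same parity, and coprimality forces both to be odd):

```python
from math import gcd, isqrt

def divisors(N):
    """All positive divisors of N."""
    ds = []
    for d in range(1, isqrt(N) + 1):
        if N % d == 0:
            ds.append(d)
            if d != N // d:
                ds.append(N // d)
    return ds

def triangles_with_leg(N):
    """All Pythagorean triangles having a leg equal to N, via the two cases above."""
    found = set()
    for k in divisors(N):
        M = N // k
        # case N = 2kmn: factor M/2 = m*n with m > n coprime, not both odd
        if M % 2 == 0:
            for m in divisors(M // 2):
                n = (M // 2) // m
                if m > n and gcd(m, n) == 1 and (m + n) % 2 == 1:
                    found.add(tuple(sorted((k * (m * m - n * n), 2 * k * m * n)))
                              + (k * (m * m + n * n),))
        # case N = k(m+n)(m-n): factor M = u*v with u = m+n > v = m-n, both odd
        for v in divisors(M):
            u = M // v
            if u > v and u % 2 == 1 and v % 2 == 1:
                m, n = (u + v) // 2, (u - v) // 2
                if gcd(m, n) == 1:
                    found.add(tuple(sorted((k * (m * m - n * n), 2 * k * m * n)))
                              + (k * (m * m + n * n),))
    return sorted(found)
```

As a sanity check, the leg {N=12} appears in exactly four triangles: (5,12,13), (9,12,15), (12,16,20) and (12,35,37).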

3. In this case {a = k(m^2+n^2)}. Therefore, one should loop on all divisors {k} of {N} and in each case solve

\displaystyle N/k = m^2+n^2

where {m>n}, {m,n} are coprime, not both odd. This can be done again with a loop.
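A direct sketch of this last case (function name mine; the divisor loop is kept naive for clarity):

```python
from math import gcd, isqrt

def triangles_with_hypotenuse(N):
    """All Pythagorean triangles with hypotenuse N: for each divisor k of N,
    solve N/k = m^2 + n^2 with m > n, coprime, not both odd."""
    found = set()
    for k in range(1, N + 1):
        if N % k:
            continue
        M = N // k
        for n in range(1, isqrt(M // 2) + 1):  # m > n forces n^2 < M/2
            m = isqrt(M - n * n)
            if m * m == M - n * n and m > n and gcd(m, n) == 1 and (m + n) % 2 == 1:
                found.add(tuple(sorted((k * (m * m - n * n), 2 * k * m * n))) + (N,))
    return sorted(found)
```

For example, {N=65=5\cdot 13} is the hypotenuse of four triangles, coming from the representations of 65, 13 and 5 as sums of two squares.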

These tools might come in handy when working on Project Euler problems, since problems involving integer sided quantities in a triangle can often be reduced to Pythagorean triangles. Once you reach this step, it is enough to loop on these triangles and perform the requested operations.

Integer sided triangle and IA, IB and IC integers

August 3, 2020 Leave a comment

Let {ABC} be a triangle whose side lengths {AB,BC,CA} are positive integers. Denote by {I} the incenter of the triangle {ABC} and suppose also that the segments {IA,IB,IC} have integer lengths. Prove that the inradius of the triangle {ABC} is an integer.

Solution: Denote by {D,E,F} the projections of the incenter on {BC,CA,AB}, respectively. Use the classical notation {a=BC,b=CA,c=AB} for the lengths of the sides of the triangle. Moreover, use the notations {x = AE=AF}, {y=BD=BF}, {z=CD=CE}. Applying Pythagoras’ theorem in the right triangles formed by {I}, the vertices {A,B,C} and the projections {D,E,F} we obtain

\displaystyle x^2+r^2 = IA^2, y^2+r^2 = IB^2, z^2+r^2=IC^2.

Moreover, it can be proved that if {s = (a+b+c)/2} then {x = s-a, y = s-b, z=s-c}. Using the hypothesis, it follows that {2x,2y,2z} are integers. Combining this with the Pythagorean relations above we find that {4r^2} must be an integer. However, this is not enough to conclude that {r} is also an integer.

Looking at the triangle {IAB}, denote {\alpha/2 = \angle IAB}, {\beta/2 = \angle IBA} and {\gamma/2 = \angle ICA}. Applying the sine rule in {IAB}, and using {(\alpha+\beta+\gamma)/2=\pi/2}, we get

\displaystyle \frac{AI}{\sin(\beta/2)} = \frac{c}{\sin((\alpha+\beta)/2)} = \frac{c}{\cos \gamma/2}.

Now note that {\sin \beta/2 = r/IB} and {\cos \gamma/2 = (s-c)/IC}, which gives

\displaystyle \frac{AI}{r/IB} = \frac{c}{(s-c)/IC}.

This complicated relation allows us to deduce that {r} is rational. This means that {r = p/q} with {p} and {q} coprime integers. Moreover, we saw that {4r^2 = 4 p^2/q^2} is an integer, which means that {q^2 |4} or {q|2}. In the end we find that {2r} should always be an integer.

Now, is it possible that {r} is only a half integer, i.e. {2r} is odd? The Pythagorean relations above imply that

\displaystyle 4x^2+4r^2=4IA^2.

If {2r} is an odd integer then {4r^2} is also odd and of the form {4k+1}. Moreover, {2x} is also an integer, which by the above relation should also be odd, which means that {4x^2} is an integer of the form {4k'+1}. In the end we arrive at

\displaystyle 4k'+1+4k+1 = 4IA^2,

which is a contradiction. Therefore {r} must be an integer!

Sum of floors of multiples of the Golden Ratio

July 25, 2020 Leave a comment

Propose an algorithm for computing

\displaystyle S_n = \sum_{i=1}^n \lfloor i\varphi \rfloor

for {n=10^{16}}, where {\varphi= \frac{\sqrt{5}+1}{2}} is the golden ratio. The sum {S_n} is the sum of the first {n} terms in the so called lower Wythoff sequence.

Solution: Computing a floor function and a multiplication is not complicated, therefore proposing an {O(n)} algorithm for computing {S_n} is trivial. However, such an algorithm is not viable for {n = 10^{16}}.

The path to a much more efficient algorithm goes through the notion of Beatty sequence. For a positive irrational number {r>1} the associated Beatty sequence is {(\lfloor nr\rfloor)_{n\geq 1}}. This notion becomes interesting when noting the following fact. If {s>1} is defined by {1/r+1/s=1} then the Beatty sequences {(\lfloor nr\rfloor)_{n\geq 1}} and {(\lfloor ns\rfloor)_{n\geq 1}} partition the positive integers: every positive integer appears in exactly one of them. The latter sequence is called the associated complementary Beatty sequence. Two proofs of this fact can be found in the Wikipedia link above.
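The partition property is easy to witness numerically, here for {r=\varphi} and {s=\varphi+1} (a quick sketch; note that {1/\varphi+1/(\varphi+1)=1}):

```python
import math

phi = (1 + math.sqrt(5)) / 2

# Beatty sequences for r = phi and s = phi + 1
A = {math.floor(n * phi) for n in range(1, 2001)}
B = {math.floor(n * (phi + 1)) for n in range(1, 2001)}
```

Every positive integer up to 2000 then lies in exactly one of the two sets (double precision is amply sufficient at this scale, since {n\varphi} stays far from integers for small {n}).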

It is obvious now that {S_n} is just the partial sum of the Beatty sequence associated to {\varphi}. Now let’s see what the complementary sequence is. A brief computation shows that the associated {s} is given by {s=\varphi+1}, so the complementary terms are {\lfloor j(\varphi+1)\rfloor = \lfloor j\varphi\rfloor+j}, i.e. not far from the terms of the original sequence. In order to see how this can give an efficient algorithm, let’s follow the instructions below:

  • denote by {N = \lfloor n\varphi \rfloor}, the largest term in {S_n}.
  • looking at all the integers up to {N}, in view of the result regarding the Beatty sequences, they are either of the form {\lfloor i\varphi \rfloor} or of the form {\lfloor j(\varphi+1) \rfloor} (in the associated complementary sequence).
  • denote by {J} the largest integer such that {\lfloor J(\varphi+1) \rfloor< N}, which is given by {J = \lfloor (N+1)/(\varphi+1)\rfloor}. Then {S_n} is the difference between the sum of all positive integers up to {N} and the sum of the complementary terms, namely {S_J+(1+2+\dots+J)}.
  • In the end we obtain

    \displaystyle S_n = N(N+1)/2 - S_J-J(J+1)/2.

  • the recursion terminates with the base case {S_0=0} (equivalently, {S_1=1}).
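The recursion above can be sketched in Python; here `Decimal` with a generous precision stands in for the arbitrary-precision arithmetic needed when {n} is as large as {10^{16}} (the function name is mine):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # generous precision so the floors below are exact

PHI = (Decimal(5).sqrt() + 1) / 2

def S(n):
    """Sum of floor(i*phi) for i = 1..n, via S_n = N(N+1)/2 - S_J - J(J+1)/2."""
    if n == 0:
        return 0
    N = int(n * PHI)              # floor(n*phi), the largest term of the sum
    J = int((N + 1) / (PHI + 1))  # number of complementary terms below N
    return N * (N + 1) // 2 - S(J) - J * (J + 1) // 2
```

For small {n} this agrees with the brute-force sum, e.g. {S_5 = 1+3+4+6+8 = 22}, and the recursion depth is only logarithmic in {n}.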

The last item above shows that a recursive algorithm can be implemented in order to compute {S_n}. Moreover, since the computation of {S_n} is reduced to the computation of {S_J} where {J = \lfloor (\lfloor n\varphi\rfloor+1)/(\varphi+1)\rfloor\leq n \frac{\varphi}{\varphi+1}+\frac{1}{\varphi+1}}, the algorithm converges very fast: the upper bound on {J} decreases geometrically with ratio {\varphi/(\varphi+1)\approx 0.618}. Therefore, the algorithm obtained has complexity {O(\log n)} and works for extremely large values of {n}, provided the language you are using can handle the required precision.

The algorithm for computing {S_n} shown above can be used, for example, in computing sums related to elements in the Wythoff table, which can be expressed using Fibonacci numbers and terms in {S_n}.

Other values of {r}, like for example {r=\sqrt{2}} (with {s=2+\sqrt{2}}) lead to other types of sums for which the same type of algorithm can be applied. It is likely that even cases where {s} is not explicitly related to {r} through an integer may be handled through a similar procedure (to be confirmed).

Compute recurrent sequences fast

July 9, 2020 1 comment

In many programming questions the computation of a recurrent sequence might be necessary. One of the famous such sequences is the Fibonacci sequence defined by {F_0=F_1=1} and

\displaystyle F_{n+1}=F_n+F_{n-1},\ n \geq 1.

Writing a code which computes {F_n} for given {n} is a no-brainer, but the complexity may vary much depending on the implementation. As it turns out, not everyone has the right reflexes when writing such a code. In the following I write some algorithmic variants and discuss their speed and complexity. In the end I’ll say a few words on how to quickly compute the {n}-th term of any recurrence relation with fixed coefficients and fixed order.

First, remember that it is possible to have an exact formula:

\displaystyle F_n = a\lambda_1^n+b\lambda_2^n

where {\lambda_{1,2}} are the solutions of {\lambda^2=\lambda+1} and {a} and {b} are found from {F_0} and {F_1}. Note that the {\lambda_i} are irrational, so when using the exact formula you need to be careful for large {n}. This could give you a {O(\log n)} algorithm, but the formula requires computing two exponentials, so it’s not really free. Moreover, it shows that the values of {F_n} grow exponentially fast, so you’ll quickly overflow if you’re not working in arbitrary precision. Finally, if you need to compute {F_n \pmod p} for really large {n}, this might not be the simplest approach, since you’re working with irrationals.

1. Recursion. The simplest way to code the computation of {F_n} is using a recursive function: create a function fibonacci which returns {1} if {n \leq 1} and returns fibonacci({n-1})+fibonacci({n-2}) otherwise. I remember coding this when I was a beginner, thinking it was a nice idea… It turns out, however, that this approach has exponential complexity. If you’re not convinced, implement it and try to compute {F_{10}, F_{20}}, etc. You’ll quickly see the exponential cost we’re talking about.

In order to see why, it is enough to note that when computing {F_n} the algorithm needs to compute {F_j} for all {j <n}. Then for each {F_j} one also needs to compute {F_{j'}} for all {j'<j}, etc. The only way this can be efficient is if one stores the values already computed. This is called recursion with memoization, and should always be used in recursive tasks where there is a risk of computing the same value multiple times. However, memoization has an {O(n)} memory cost.

2. Iterative sequence. The second approach is more efficient. You only need two variables, {A} and {B}, which store {F_n} and {F_{n-1}}. At each iteration, store {A+B} in a temporary variable, copy {A} into {B}, then copy the temporary into {A}. It is obvious that this approach costs {O(n)} in execution time and {O(1)} in memory. If you need to store all values {F_j}, {j\leq n}, you will also have an {O(n)} memory cost. As always, when doing this you can go up to {n\approx 10^9} in reasonable computing time (even higher, depending on your hardware or programming language). However, reaching {n=10^{16}} is generally out of the question.
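In Python the iteration reads (a minimal sketch, with the convention {F_0=F_1=1} used above):

```python
def fib_iter(n):
    """Iterative Fibonacci with F_0 = F_1 = 1: O(n) time, O(1) memory."""
    a, b = 1, 1  # a = F_1, b = F_0
    for _ in range(n - 1):
        a, b = a + b, a  # advance: (F_j, F_{j-1}) -> (F_{j+1}, F_j)
    return a
```

The tuple assignment plays the role of the temporary variable mentioned above.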

3. Matrix exponentiation. One interesting approach, which is really fast, is rewriting the second order recurrence as a first order matrix recurrence:

\displaystyle \begin{pmatrix} F_{n+1}\\ F_n \end{pmatrix} = \begin{pmatrix} 1& 1 \\ 1& 0 \end{pmatrix} \begin{pmatrix} F_{n}\\ F_{n-1} \end{pmatrix}

This implies that {\begin{pmatrix} F_{n+1}\\ F_n \end{pmatrix} = A^n \begin{pmatrix} F_{1}\\ F_0 \end{pmatrix}} where {A = \begin{pmatrix} 1& 1 \\ 1& 0 \end{pmatrix}}. Therefore, in order to compute {F_n} only a matrix power is needed. Using exponentiation by squaring this can be done in {O(\log n)} time. At the end you will obtain an (exact) integer result or a result modulo whatever integer {p} you want. So this is the way to go if you need to compute {F_n} for extremely large {n}.
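A self-contained sketch of this approach (function names mine; reducing modulo {p} at every multiplication keeps the numbers small):

```python
def mat_mult(X, Y, mod=None):
    """Product of two 2x2 matrices, reduced modulo `mod` if given."""
    Z = [[X[i][0] * Y[0][j] + X[i][1] * Y[1][j] for j in range(2)] for i in range(2)]
    if mod is not None:
        Z = [[v % mod for v in row] for row in Z]
    return Z

def mat_pow(A, n, mod=None):
    """A^n by exponentiation by squaring: O(log n) multiplications."""
    R = [[1, 0], [0, 1]]  # identity
    while n > 0:
        if n & 1:
            R = mat_mult(R, A, mod)
        A = mat_mult(A, A, mod)
        n >>= 1
    return R

def fib_fast(n, mod=None):
    """F_n (with F_0 = F_1 = 1) from the n-th power of [[1,1],[1,0]]."""
    M = mat_pow([[1, 1], [1, 0]], n, mod)
    # (F_{n+1}, F_n)^T = M (F_1, F_0)^T = M (1, 1)^T, so F_n = M[1][0] + M[1][1]
    F = M[1][0] + M[1][1]
    return F % mod if mod is not None else F
```

With a modulus this runs comfortably for {n} around {10^{18}} and beyond.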

Please note that you can generalize this to recurrences of the form

\displaystyle x_{n+k+1} = a_{k}x_{n+k} + ... + a_1 x_{n+1}.

It suffices to see that

\displaystyle \begin{pmatrix} x_{n+k+1}\\ x_{n+k} \\ ... \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} a_{k} & a_{k-1} & ... & a_2 & a_1 \\ 1 & 0 & ... & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & ... & 1& 0 \end{pmatrix} \begin{pmatrix} x_{n+k}\\ x_{n+k-1} \\ ... \\ x_{n} \end{pmatrix}

so in the end you just need to compute the exponential of a bigger matrix. The complexity stays the same!

What to remember from this post?

  • Don’t use recursion (without memoization) when computing recurrent sequences. You won’t get very far and you’ll waste your resources!
  • You can compute the {n}-th value in recurrent sequences with constant coefficients with a {O(\log n)} complexity using matrix exponentiation. This works well even for huge {n} when working modulo a fixed integer {p}.

Zero order methods – locating singularities

June 13, 2020 Leave a comment

In my previous post I talked about zero-order optimization methods and why they present a loss of precision when dealing with minimizers of smooth functions. Near such points the function varies too slowly, so it is not possible to attain a precision greater than the square root of the machine precision, based solely on the function values.

However, the situation changes when dealing with non-smooth functions. In particular, when searching for singularities. I asked my students the following question:

In a research article you see the following figure. The author says that the downward pointing spikes correspond to solutions {x_i} of some problem of interest and that they are found using the Golden Search method.


A list of the solutions {x_i} is given and the author claims they are precise up to 12 significant digits. Knowing that the computations were made using a machine precision which gives 16 significant digits, do you believe this claim? Justify your answer.

Many of the students stated that the claim was wrong, since no more than 8 digits can be guaranteed using a zero-order method. However, some realized that locating the spikes, which are local minima of a function that is unimodal in a small neighborhood, does not fall into the problematic cases for zero-order methods.

Why is that? The reason zero-order methods are imprecise around a local minimum {x^*} where the function is regular is that the first derivative vanishes there, so the function behaves like a quadratic in {(x-x^*)}. This is not the case in the figure above, since the spikes correspond to singularities, where the derivative is not well defined. Moreover, close to the local-minimum singularities the function varies quickly, not slowly as in the smooth case. As a consequence, zero-order methods will have no problem finding the singularities to the precision to which we can compute the respective function.

In order to test this claim in practice, apply some zero-order optimization algorithm (trisection, Golden Search, Fibonacci search) to the functions:

  • {f(x) = \sqrt{|x-0.1|}+0.3} on {[-1,1]}
  • {f(x) = \log |x-1|} on {[0,2]}.

Note that the function should be singular both from the left and from the right of the minimum point. Take for example {f(x) = \begin{cases} (x-1)^2+1 & x<1 \\ \sqrt{x-1}+1 & x\geq 1\end{cases}}. This function varies rapidly on the right hand side of {1}, but slowly on the left hand side. Therefore the problem we had for smooth functions can still occur.
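A quick experiment with the first function above, using a minimal golden-section implementation (a sketch, names mine), shows the singular minimizer is located far beyond the {\sqrt{\varepsilon}} barrier:

```python
import math

def golden_search(f, a, b, iters=200):
    """Golden-section search for the minimizer of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi = 0.618...
    x1, x2 = b - invphi * (b - a), a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    for _ in range(iters):
        if f1 <= f2:              # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)
            f1 = f(x1)
        else:                     # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

# the function is singular at 0.1, so the minimizer is pinned down
# essentially to machine precision, not just to sqrt(eps)
x = golden_search(lambda t: math.sqrt(abs(t - 0.1)) + 0.3, -1.0, 1.0)
```

Here the returned {x} agrees with {0.1} to well over 12 significant digits, consistent with the claim discussed above.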

Conclusion: pay attention to the hypotheses needed to justify the various claims you encounter. Zero-order methods are limited in precision for smooth functions, but they work perfectly fine if the function is non-smooth at its minimum point.


Are zero-order optimization methods any good?

May 20, 2020 1 comment

The short answer is yes, but only when the derivative or gradient of the objective function is not available. To fix the ideas we refer to:

  • optimization algorithm: as an iterative process of searching for approximations of a (local/global) minimum of a certain function {f:\Bbb{R}^n \rightarrow \Bbb{R}}
  • zero-order algorithm: an optimization method which only uses function evaluations in order to decide on the next point in the iterative process.

Therefore, in view of the definitions above, zero-order algorithms want to approximate minimizers of a function using only function evaluations; no further information on derivatives is available. Classical examples are bracketing algorithms and genetic algorithms. The objective here is not to go into detail on any of these algorithms, but to underline one basic limitation which must be taken into account whenever considering these methods.

In a zero-order optimization algorithm any decision regarding the choice of the next iterate can be made only by comparing the values of {f} at different evaluation points. For example, look at the classical trisection algorithm for minimizing a unimodal function, i.e. a real function defined on {[a,b]} which is decreasing on {[a,x^*]} and increasing on {[x^*,b]}:

  • given the bracketing {x^* \in [a_i,b_i]} choose the points {x_-=2/3a_i+1/3b_i} and {x_+=1/3a_i+2/3b_i}.
  • compare the values {f(x_-)} and {f(x_+)} in order to decide on the next bracketing interval:

    if {f(x_-)\leq f(x_+)} then {[a_{i+1},b_{i+1}]=[a_i,x_+]}

    if {f(x_-)\geq f(x_+)} then {[a_{i+1},b_{i+1}]=[x_-,b_i]}

  • stop when the difference {b_i-a_i} is small enough.
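The three steps above translate directly into code (a minimal sketch, function name mine):

```python
def trisection(f, a, b, tol=1e-8):
    """Trisection search for the minimum of a unimodal f on [a, b]."""
    while b - a > tol:
        x_minus = (2 * a + b) / 3
        x_plus = (a + 2 * b) / 3
        if f(x_minus) <= f(x_plus):
            b = x_plus   # the minimum lies in [a, x_plus]
        else:
            a = x_minus  # the minimum lies in [x_minus, b]
    return (a + b) / 2
```

Each iteration shrinks the bracket by a factor {2/3} at the cost of two function evaluations (Golden Search improves this by reusing one evaluation per step).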

One question immediately arises: can such an algorithm reach any desired precision, for example the machine precision, i.e. the precision to which computations are done in the software used? To fix the ideas, we’ll suppose that we are in the familiar world of numbers written in double precision, where the machine precision is something like {\varepsilon = 2.2\times 10^{-16}}. I will not go into further details regarding this, since people do whole courses on this. Note that Matlab, Octave and Numpy are well known environments whose default setup uses this machine precision.

More precisely, the real numbers are stored as floating point numbers and only {16} digits are relevant in the computations. When adding two numbers {a,b} whose ratio {a/b} is smaller than {\varepsilon}, the result will be {a+b = b}. Indeed, adding the numbers requires shifting their digits so that the decimal points align, and since {a/b} is smaller than {\varepsilon}, after this shift all significant digits of {a} fall beyond the {16} digits stored for {b}. Therefore the addition will not change any significant digit of {b}.

This issue related to computations done in floating point precision makes the question of comparing {f(x_-)} and {f(x_+)} in the algorithm above pointless when {x_-} and {x_+} are close to the minimum. In order to identify the source of the potential problem let us write the Taylor expansion of the function {f} around the minimum {x^*}. First, note that {f'(x^*)=0} at the minimum, which leaves us with

\displaystyle f(x) = f(x^*)+f'(x^*)(x-x^*)+\frac{1}{2} f''(x^*)(x-x^*)^2+o()=f(x^*)+\frac{1}{2} f''(x^*)(x-x^*)^2+o(),

where {o()} denotes higher order terms. You may note that neglecting the higher order terms shows that the function looks like a quadratic function near the optimum. This is a good thing to keep in mind, and students should not hold any grudge against professors who illustrate the behavior of optimization algorithms on quadratic functions. As it turns out, any good algorithm needs to behave well for quadratic functions, since near a minimum every smooth function is essentially quadratic.

Coming back to our previous paragraph, it turns out that the computer won’t be able to tell the difference between {f(x)} and {f(x^*)} if

\displaystyle \frac{1}{2} |f''(x^*)|(x-x^*)^2<\varepsilon |f(x^*)|,

where {\varepsilon} is the machine precision. Therefore, if

\displaystyle |x-x^*|/|x^*| < \sqrt{\varepsilon }\sqrt{\frac{2|f(x^*)|}{|x^*|^2|f''(x^*)|}}

the machine won’t tell the difference between {f(x)} and {f(x^*)}. Since the second factor in the above expression is well defined and positive in most cases, it turns out that we will not be able to decide if we are closer than {\sqrt{\varepsilon}} to the solution {x^*} based only on the values of the function {f}.
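This effect is easy to witness directly in double precision (a tiny sketch; the quadratic below is just an illustrative example):

```python
def f(x):
    # smooth function with minimum value 1 at x* = 1
    return (x - 1.0) ** 2 + 1.0

# a perturbation below sqrt(eps) ~ 1.5e-8 is invisible in the function values,
# while a clearly larger one is not
same = f(1.0 + 1e-9) == f(1.0)       # True: (1e-9)^2 = 1e-18 vanishes next to 1
different = f(1.0 + 1e-7) != f(1.0)  # True: (1e-7)^2 = 1e-14 survives
```

So comparisons of function values stop being informative roughly {\sqrt{\varepsilon}} away from the minimizer, exactly as the estimate above predicts.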

Therefore, when using a zero-order algorithm for a function {f} with {x^*,f(x^*),f''(x^*)\neq 0}, you cannot decide based only on the function values whether you are closer to the optimum {x^*} than the square root of the machine precision {\sqrt{\varepsilon}}. This is quite a limitation, and being aware of it should save lots of pointless function evaluations in practice.

When using derivatives or gradients there is no such problem, since we can decide if we are close enough to {x^*} by comparing the derivative {f'(x^*)} to zero, and this comparison is valid up to the precision to which we can compute {f'}.

In conclusion, use zero-order methods only if gradient based methods are out of reach. Zero-order methods are not only slowly convergent, but may also be unable to achieve the expected precision in the end.
