Sierpinski’s Theorem for Additive Functions – Simplification

1. If ${f}$ is a solution of the Cauchy functional equation that is surjective but not injective, then ${f}$ has the Darboux property.

2. For every solution ${f}$ of the Cauchy functional equation there exist two non-trivial solutions ${f_1,f_2}$ of the same equation, such that ${f_1}$ and ${f_2}$ have the Darboux property and ${f=f_1+f_2}$.

Both results were proved in this post. The version presented here is simplified, isolating exactly what is needed to obtain the desired conclusions.

January 26, 2014

We say that ${f: \Bbb{R} \rightarrow \Bbb{R}}$ is an additive function if

$\displaystyle f(x+y)=f(x)+f(y),\ \forall x,y \in \Bbb{R}.$

1. Prove that there exist discontinuous additive functions both with and without the Darboux property.

2. Prove that for every additive function ${f}$ there exist two functions ${f_1,f_2:\Bbb{R} \rightarrow \Bbb{R}}$ which are additive, have the Darboux property, and satisfy ${f=f_1+f_2}$.

The second part is similar to Sierpinski’s theorem, which states that every real function can be written as the sum of two real functions with the Darboux property.

(A function ${g:I \rightarrow \Bbb{R}}$ has the Darboux property if for every ${[a,b]\subset I}$, ${g([a,b])}$ is an interval.)

Numerical method – minimizing eigenvalues on polygons

I will present here an algorithm to numerically find the polygon ${P_n}$ with ${n}$ sides which minimizes the ${k}$-th eigenvalue of the Dirichlet Laplacian under a volume constraint.

The first question is: how do we compute the eigenvalues of a polygon? I adapted a variant of the method of fundamental solutions (see the general 2D case here) to the polygonal case. The method of fundamental solutions consists in finding a function which already satisfies the equation ${-\Delta u=\lambda u}$ on the whole plane, and then requiring it to vanish on the boundary of the desired shape. We choose the fundamental solutions to be the radial functions ${\Phi_i^\omega(x)=\frac{i}{4}H_0^{(1)}(\omega|x-y_i|)}$, where ${H_0^{(1)}}$ is the Hankel function of the first kind, ${y_1,\dots,y_m}$ are well chosen source points and ${\omega^2=\lambda}$. We search for our solution as a linear combination of the functions ${\Phi_i}$, so we have to solve a system of the form

$\displaystyle \sum_{i=1}^m \alpha_i \Phi_i^\omega(x) =0 , \text{ on }\partial \Omega$

in order to find the desired eigenfunction. Since we cannot solve this system numerically for every ${x \in \partial \Omega}$, we choose a discretization ${(x_j)_{j=1..m}}$ of the boundary of ${\Omega}$ and arrive at a system of equations of the form

$\displaystyle \sum_{i=1}^m \alpha_i \Phi_i^\omega(x_j) = 0$

and this system has a nontrivial solution if and only if the matrix ${A^\omega = (\Phi_i^\omega(x_j))}$ is singular. The values ${\omega}$ for which ${A^\omega}$ is singular are exactly the square roots of the eigenvalues of our domain ${\Omega}$.

IMO 1981 Day 1

Problem 1. Let ${P}$ be a point inside a given triangle ${ABC}$ and denote by ${D,E,F}$ the feet of the perpendiculars from ${P}$ to the lines ${BC,CA,AB}$ respectively. Find ${P}$ such that the quantity

$\displaystyle \frac{BC}{PD}+\frac{CA}{PE}+\frac{AB}{PF}$

is minimal.

Problem 2. Let ${1 \leq r\leq n}$ and consider all subsets of ${r}$ elements of the set ${\{1,2,\dots,n\}}$. Each of these subsets has a smallest member. Let ${F(n,r)}$ denote the arithmetic mean of these smallest numbers. Prove that

$\displaystyle F(n,r)=\frac{n+1}{r+1}.$
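Identities like this are easy to spot-check numerically before attempting a proof. A small brute-force sketch (the helper `F` below is mine):

```python
from itertools import combinations
from fractions import Fraction

def F(n, r):
    # arithmetic mean of the smallest members of all r-element subsets of {1,...,n}
    mins = [min(s) for s in combinations(range(1, n + 1), r)]
    return Fraction(sum(mins), len(mins))

print(F(10, 3))  # 11/4, matching (n+1)/(r+1)
```

Using `Fraction` keeps the mean exact, so the comparison with ${(n+1)/(r+1)}$ is not polluted by floating-point error.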

Problem 3. Determine the maximum value of ${m^3+n^3}$ where ${m,n \in \{1,2,\dots,1981\}}$ with ${(n^2-mn-m^2)^2=1}$.
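Problem 3 can also be checked by brute force. A sketch (the function name is mine; the bound is a parameter so that small cases run instantly):

```python
import numpy as np

def max_cube_sum(limit=1981):
    # Search all pairs (m, n) in {1,...,limit}^2 with (n^2 - mn - m^2)^2 = 1;
    # the solutions turn out to be pairs of consecutive Fibonacci numbers.
    n = np.arange(1, limit + 1, dtype=np.int64)
    best = 0
    for m in range(1, limit + 1):
        hits = n[(n * n - m * n - m * m) ** 2 == 1]
        if hits.size:
            best = max(best, m ** 3 + int(hits.max()) ** 3)
    return best

print(max_cube_sum())  # attained at m = 987, n = 1597
```

Vectorizing the inner loop over ${n}$ keeps the full ${1981^2}$ search fast while staying within 64-bit integer range.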

Asymptotic characterization in terms of sequence limits

Suppose ${f:(0,\infty) \rightarrow \Bbb{R}}$ is a continuous function such that for every ${x>0}$ we have

$\displaystyle \lim_{n \rightarrow \infty} f(nx)=0.$

Prove that ${\lim\limits_{x \rightarrow \infty} f(x)=0}$.

The Cantor function and some of its properties

Let’s start by defining the Cantor set. Define ${C_0=[0,1]}$ and ${C_{n+1} = C_n/3 \cup (2/3+C_n/3)}$. At each step we delete the middle third of each of the intervals of ${C_n}$ to obtain ${C_{n+1}}$. Note that we obviously have ${C_{n+1} \subset C_{n}}$ (an easy inductive argument) and ${|C_n|=(2/3)^n}$. The sets ${C_n}$ are compact and descending, so we can define ${C=\bigcap_{n=0}^\infty C_n}$, a nonempty compact subset of ${[0,1]}$ with zero measure, called the Cantor set.

Since at each step we remove a middle third of all the intervals in ${C_n}$, one way to look at the Cantor set is through the ternary representation of its points. In the first step, we remove all the elements of ${[0,1]}$ which have ${1}$ in the first position of their ternary representation. In the second step we remove those (remaining) which have ${1}$ in the second position, and so on. In the end we are left only with elements of ${[0,1]}$ whose ternary representation uses only the digits ${0,2}$. Using this representation we can construct a surjection from ${C}$ onto ${[0,1]}$ (it fails to be injective only at countably many pairs of interval endpoints) which maps

$\displaystyle x=\sum_{n=1}^\infty \frac{a_n}{3^n} \mapsto \sum_{n=1}^\infty \frac{b_n}{2^n}$

where ${b_n=0}$ if ${a_n=0}$ and ${b_n=1}$ if ${a_n=2}$. This proves that the Cantor set is uncountable.
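The digit map above is straightforward to implement. A small sketch (helper names are mine) that truncates both expansions at finitely many digits:

```python
def cantor_point(digits):
    # point of C with ternary digits a_n in {0, 2}: x = sum a_n / 3^n
    return sum(a / 3 ** k for k, a in enumerate(digits, start=1))

def image_point(digits):
    # image in [0,1]: binary digits b_n = a_n / 2, i.e. sum (a_n/2) / 2^n
    return sum((a // 2) / 2 ** k for k, a in enumerate(digits, start=1))

# digits 0,2 repeating: x = 0.0202..._3 = 1/4 maps to 0.0101..._2 = 1/3
digits = [0, 2] * 30
print(cantor_point(digits), image_point(digits))
```

With 60 digits the truncation error is below ${2^{-60}}$, so the printed values agree with ${1/4}$ and ${1/3}$ to machine precision.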

We can construct the Cantor function ${g:[0,1] \rightarrow [0,1]}$ in the following way. Denote by ${R_n}$ the set ${C_{n-1}\setminus C_{n}}$ (i.e. the set removed in step ${n}$). On ${R_1}$ we let ${g(x)=1/2}$. On ${R_2}$ we have two intervals: on the left one we let ${g(x)=1/4}$ and on the right one we let ${g(x)=3/4}$. We continue iteratively, at each step choosing ${g}$ constant on each of the intervals which make up ${R_n}$, the constant on each interval being the mean of the values already assigned on the two neighboring intervals.
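One standard way to evaluate ${g}$ pointwise, consistent with the construction above, is to scan the ternary digits of ${x}$: copy digits ${0,2}$ as binary digits ${0,1}$ until the first digit ${1}$ appears, which contributes ${1/2^n}$ and ends the scan (such an ${x}$ lies in one of the removed intervals). A sketch, with implementation details (function name, truncation depth) mine:

```python
def cantor_function(x, depth=48):
    # Evaluate the Cantor function g(x) by scanning ternary digits of x.
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    total, weight = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        d = int(x)          # next ternary digit of x
        x -= d
        if d == 1:
            return total + weight   # constant value on the removed interval
        total += (d // 2) * weight  # digit 2 becomes binary digit 1
        weight /= 2
    return total

print(cantor_function(0.5), cantor_function(0.25))
```

For example ${g(1/2)=1/2}$ (the digit ${1}$ appears immediately), while ${1/4 = 0.0202\ldots_3}$ gives ${g(1/4)=0.0101\ldots_2=1/3}$; the truncation at 48 digits leaves an error below ${2^{-48}}$.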

Eigenvalues – from finite dimension to infinite dimension

We can look at a square matrix ${A \in \mathcal{M}_n(\Bbb{R})}$ and see it as a table of numbers. In this case, matrices ${B}$ and ${C}$ below are completely different:
$\displaystyle B=\begin{pmatrix}1.48 & -0.36& -0.12 \\ -1.44 & 1.08 & -0.64 \\ 2.24 & 1.32 & 3.44 \end{pmatrix}, C=\begin{pmatrix}-1& 14 & 6 \\ -1 & 6 & 2 \\ 0.5 & -0.5& 1 \end{pmatrix}$
If instead we look at a square matrix as a linear transformation ${f : \Bbb{R}^n \rightarrow \Bbb{R}^n, f(x)=Ax}$, things change a lot. Since the transformation is arbitrary, there is no reason for ${A}$ to act in every direction in the same way, and some directions are privileged in the sense that the transformation is a simple dilation in those special directions, i.e. there exist a scalar ${\lambda}$ and a non-zero vector ${v}$ (the direction) such that ${Av=\lambda v}$. The values ${\lambda}$ and the corresponding vectors ${v}$ are so important for the matrix ${A}$ that they almost characterize it; hence their names, eigenvalue and eigenvector, which mean own value and own vector (eigen = own in German). It turns out that ${B}$ and ${C}$ above have the same eigenvalues ${1,2,3}$, and because these are distinct, both matrices are similar to the diagonal matrix ${\text{diag}(1,2,3)}$ (${X}$ and ${Y}$ are similar if there exists an invertible ${P}$ such that ${X=PYP^{-1}}$).
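The claim about ${B}$ and ${C}$ is easy to verify numerically; a minimal numpy sketch:

```python
import numpy as np

B = np.array([[ 1.48, -0.36, -0.12],
              [-1.44,  1.08, -0.64],
              [ 2.24,  1.32,  3.44]])
C = np.array([[-1.0, 14.0, 6.0],
              [-1.0,  6.0, 2.0],
              [ 0.5, -0.5, 1.0]])

# both matrices should have eigenvalues 1, 2, 3
eig_B = np.sort(np.linalg.eigvals(B).real)
eig_C = np.sort(np.linalg.eigvals(C).real)
print(eig_B, eig_C)
```

One can also check the spectra by hand: both matrices have trace ${6}$, sum of principal ${2\times 2}$ minors ${11}$ and determinant ${6}$, so both have characteristic polynomial ${\lambda^3-6\lambda^2+11\lambda-6=(\lambda-1)(\lambda-2)(\lambda-3)}$.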