Iterative algorithms – Convergence rates
In optimization, and in iterative numerical algorithms in general, we are interested in having convergence estimates. We are not only interested in showing that the error goes to $0$, although that is essential, but also at what speed it does so. Between two algorithms solving the same problem, the one that converges faster is usually the better choice.
Generally, there are two points of view on convergence: estimating the error as a function of $n$, the number of iterations, or comparing the error at iteration $n+1$ with the error at iteration $n$. I will take the latter point of view below.
To fix the ideas, denote by $e_n = |x_n - x^*|$ the error estimate at iteration $n$, where $x_n$ is the current iterate and $x^*$ is the solution to the problem.
We have the following standard classification:
- linear convergence: there exists $c \in (0,1)$ such that $e_{n+1} \leq c\, e_n$ for all $n$ large enough;
the constant $c$ is called the convergence ratio; it is easy to show that $e_n \leq c^n e_0$, so in particular $e_n \to 0$.
- sublinear convergence: $e_n \to 0$, but $(e_n)$ is not linearly convergent
- superlinear convergence: $(e_n)$ converges linearly with any positive convergence ratio $c \in (0,1)$;
sufficient condition: $\lim_{n \to \infty} e_{n+1}/e_n = 0$.
- convergence of order $p > 1$: there exists $M > 0$ such that $e_{n+1} \leq M e_n^p$ for $n$ large enough;
$p$ is called the order of convergence;
the case $p = 2$ has a special name: quadratic convergence
To underline the significant difference between linear and superlinear convergence consider the following examples. Let $a_n = 2^{-n}$, $b_n = n^{-n}$ and $c_n = 2^{-2^n}$. Then:
- $a_n = 2^{-n}$ converges linearly to zero (with ratio $c = 1/2$), but not superlinearly
- $b_n = n^{-n}$ converges superlinearly to zero, but not quadratically
- $c_n = 2^{-2^n}$ converges to zero quadratically, since $c_{n+1} = c_n^2$
Quadratic convergence is much faster than linear convergence.
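To see the gap numerically, here is a quick sketch comparing the three model behaviors (using $2^{-n}$, $n^{-n}$ and $2^{-2^n}$ as stand-ins for linear, superlinear and quadratic error sequences):

```python
# Error after n steps for three model sequences: note how the quadratic
# one collapses far below the others after only a handful of steps.
for n in range(1, 7):
    linear = 2.0 ** (-n)
    superlinear = float(n) ** (-n)
    quadratic = 2.0 ** (-(2 ** n))
    print(f"n={n}:  linear={linear:.3e}  superlinear={superlinear:.3e}  quadratic={quadratic:.3e}")
```

Already at $n = 6$ the quadratic sequence is around $2^{-64} \approx 5 \cdot 10^{-20}$, while the linear one is still at $2^{-6} \approx 0.016$.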
Among optimization algorithms, the simpler ones are usually linearly convergent (bracketing algorithms: trisection, golden-section search, the bisection algorithm, gradient descent). Algorithms involving higher-order information or approximations of it are generally superlinear (the secant method, Newton's method, BFGS or L-BFGS in higher dimensions, etc.).
There is a huge difference in practice between linear convergence and superlinear convergence. If a faster algorithm is available, using it is surely useful!
Sum of floors of multiples of the Golden Ratio
Propose an algorithm for computing
$$S(n) = \sum_{k=1}^{n} \lfloor k\varphi \rfloor$$
for large $n$, where $\varphi = \frac{1+\sqrt{5}}{2}$ is the golden ratio. The sum $S(n)$ is the sum of the first $n$ terms of the so-called lower Wythoff sequence.
Solution: Computing a floor function and a multiplication is not complicated, therefore proposing an $O(n)$ algorithm for computing $S(n)$ is trivial. However, such an algorithm is not viable for very large $n$.
The path to a much more efficient algorithm goes through the notion of Beatty sequence. For a positive irrational number $r > 1$, the associated Beatty sequence is $\mathcal{B}_r = (\lfloor r \rfloor, \lfloor 2r \rfloor, \lfloor 3r \rfloor, \dots)$. This notion becomes interesting when noting the following fact. If $s$ is defined by $\frac{1}{r} + \frac{1}{s} = 1$, then the Beatty sequences $\mathcal{B}_r$ and $\mathcal{B}_s$ cover all the positive integers, each appearing exactly once. The latter sequence is called the associated complementary Beatty sequence. Two proofs of this fact can be found on the Wikipedia page for Beatty sequences.
It is obvious now that $S(n)$ is just a partial sum of the Beatty sequence associated to $r = \varphi$. Now let's see what the complementary sequence is. A brief computation shows that the associated $s$ is given by $s = \varphi^2 = \varphi + 1$, which shows that $\lfloor k\varphi^2 \rfloor = \lfloor k\varphi \rfloor + k$: the terms which are not of the form $\lfloor k\varphi \rfloor$ are not far from it. In order to see how this can give an efficient algorithm, let's follow the steps below:
- denote by $m = \lfloor n\varphi \rfloor$ the largest term in the sum $S(n)$.
- looking at all the integers up to $m$, in view of the result regarding the Beatty sequences, they are either of the form $\lfloor k\varphi \rfloor$ with $k \leq n$, or of the form $\lfloor k\varphi^2 \rfloor$ (in the associated complementary sequence).
- denote by $n'$ the largest integer such that $\lfloor n'\varphi^2 \rfloor \leq m$; since exactly $n$ of the integers up to $m$ belong to $\mathcal{B}_\varphi$, it is given by $n' = m - n$. Then, it is not difficult to see that $\sum_{k=1}^{n'} \lfloor k\varphi^2 \rfloor = S(n') + \frac{n'(n'+1)}{2}$ is the difference between the sum of all positive integers up to $m$ and $S(n)$.
- In the end we obtain
$$S(n) = \frac{m(m+1)}{2} - S(n') - \frac{n'(n'+1)}{2}.$$
- by convention, state that $S(0) = 0$.
The last item above shows that a recursive algorithm can be implemented in order to compute $S(n)$. Moreover, since the computation of $S(n)$ is reduced to the computation of $S(n')$ where $n' = m - n \approx n(\varphi - 1) = n/\varphi$, the argument shrinks by a constant factor at each step, so the algorithm will converge very fast. Therefore, the algorithm obtained will have complexity $O(\log n)$ and will work for extremely large values of $n$, provided the language you are using can handle the required precision.
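The recursion above can be sketched in a few lines of Python. One subtle point is computing $m = \lfloor n\varphi \rfloor$ exactly for huge $n$: floating point is not precise enough, but since $n\varphi = \frac{n + \sqrt{5n^2}}{2}$, the floor can be obtained from an exact integer square root. (Function names are mine.)

```python
from math import isqrt

def floor_n_phi(n):
    # floor(n*phi) = floor((n + sqrt(5*n^2)) / 2), computed with exact
    # integer arithmetic so the result is correct for arbitrarily large n
    return (n + isqrt(5 * n * n)) // 2

def S(n):
    # S(n) = sum_{k=1}^{n} floor(k*phi), via the Beatty recursion
    # S(n) = m(m+1)/2 - S(n') - n'(n'+1)/2, m = floor(n*phi), n' = m - n
    if n <= 0:
        return 0
    m = floor_n_phi(n)
    np = m - n  # largest k with floor(k*phi^2) <= m
    return m * (m + 1) // 2 - S(np) - np * (np + 1) // 2
```

Since $n' \approx n/\varphi$, the recursion depth is $O(\log n)$.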
The algorithm for computing $S(n)$ shown above can be used, for example, in computing sums related to elements of the Wythoff table, which can be expressed using Fibonacci numbers and terms of the lower Wythoff sequence.
Other values of $r$, together with the associated complementary parameter $s$, lead to other types of sums for which the same type of algorithm can be applied. It is likely that even cases where $s$ is not explicitly related to $r$ through an integer may be handled through a similar procedure (to be confirmed).
Compute recurrent sequences fast
In many programming questions the computation of a recurrent sequence might be necessary. One of the most famous such sequences is the Fibonacci sequence, defined by $f_1 = f_2 = 1$ and
$$f_{n+1} = f_n + f_{n-1}, \quad n \geq 2.$$
Writing code which computes $f_n$ for a given $n$ is a no-brainer, but the complexity may vary greatly depending on the implementation. As it turns out, not everyone has the right reflexes when writing such code. In the following I present some algorithmic variants and discuss their speed and complexity. In the end I'll say a few words on how to quickly compute the $n$-th term of any recurrence relation with fixed coefficients and fixed order.
First, remember that it is possible to have an exact formula:
$$f_n = a \lambda_1^n + b \lambda_2^n,$$
where $\lambda_1, \lambda_2$ are the solutions of $\lambda^2 = \lambda + 1$ and $a$ and $b$ are found from $f_1$ and $f_2$. Note that $\lambda_{1,2} = \frac{1 \pm \sqrt{5}}{2}$ are irrational, so when using the exact formula you might need to be careful with precision for large $n$. This could give you an $O(\log n)$ algorithm, but the formula, while explicit, requires computing two exponentials, so it's not really free. Moreover, this shows that the values of $f_n$ grow exponentially fast, so you'll quickly overflow if you're not working in arbitrary precision. Finally, if you need to compute $f_n$ for really large $n$, this might not be the simplest approach, since you're working with irrationals.
1. Recursion. The simplest way to code the computation of $f_n$ is using a recursive function. Create a function fibonacci which, depending on the value of $n$, returns $1$ if $n \leq 2$ or returns fibonacci(n-1)+fibonacci(n-2). I remember coding this when I was a beginner, thinking it was a nice idea… It turns out, however, that this approach has exponential complexity. If you're not convinced, implement it and try it for increasingly large $n$. You'll quickly see the exponential cost we're talking about.
In order to see why, it is enough to note that when computing $f_n$ the algorithm needs to compute $f_k$ for all $k < n$. Then for each $f_k$ one also needs to compute $f_j$ for all $j < k$, and so on, recomputing the same values over and over. The only way this can be efficient is if one stores the values already computed. This is called recursion with memoization, and it should always be used when coding recursive tasks where there is a risk of computing the same value multiple times. However, memoization has an $O(n)$ memory cost.
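In Python, memoization can be sketched with a cache; the decorator from `functools` is one convenient way to do it:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n):
    # f_1 = f_2 = 1, f_{n+1} = f_n + f_{n-1}; each value computed only once
    if n <= 2:
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)
```

Note that the recursion depth still grows with $n$, so in Python this version remains limited by the interpreter's recursion limit.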
2. Iterative sequence. The second approach is more efficient. You only need two variables, $a$ and $b$, which store $f_{n-1}$ and $f_n$. Then at each iteration, store $a+b$ in a temporary variable, change $a$ into $b$ and store the value of the temporary variable in $b$. It is obvious that this approach costs $O(n)$ in execution time and $O(1)$ in memory. If you need to store all values of $f_k$ up to $k = n$, you will also have an $O(n)$ memory cost. With this approach you can go quite far in reasonable computing time (how far depends on your hardware and programming language). However, for truly huge $n$ it is generally out of the question.
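A sketch of the iterative version (variable names are mine):

```python
def fib_iter(n):
    # O(n) time, O(1) memory: keep only the last two values f_{k-1}, f_k
    a, b = 1, 1  # f_1, f_2
    for _ in range(n - 2):
        a, b = b, a + b  # slide the window: (f_{k-1}, f_k) -> (f_k, f_{k+1})
    return b if n >= 2 else 1
```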
3. Matrix exponentiation. One interesting approach, which is really fast, is rewriting the second order recurrence as a first order matrix recurrence:
$$\begin{pmatrix} f_{n+1} \\ f_n \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} f_n \\ f_{n-1} \end{pmatrix}.$$
This implies that $\begin{pmatrix} f_{n+1} \\ f_n \end{pmatrix} = A^{n-1} \begin{pmatrix} f_2 \\ f_1 \end{pmatrix}$ where $A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$. Therefore, in order to compute $f_n$ only a matrix power is needed. Using exponentiation by squaring this can be done in $O(\log n)$ time. At the end you will obtain an (exact) integer result or a result modulo whatever integer you want. So this is the way to go if you need to compute $f_n$ for extremely large $n$.
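A sketch of the matrix approach in Python (the optional `mod` argument yields $f_n$ modulo an integer; names are mine):

```python
def mat_mul(A, B, mod=None):
    # product of two 2x2 matrices, optionally reduced modulo `mod`
    C = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    if mod is not None:
        C = [[x % mod for x in row] for row in C]
    return C

def fib_matpow(n, mod=None):
    # f_n (with f_1 = f_2 = 1) is the top-left entry of A^(n-1),
    # A = [[1, 1], [1, 0]], computed by exponentiation by squaring
    result = [[1, 0], [0, 1]]  # identity matrix
    base = [[1, 1], [1, 0]]
    e = n - 1
    while e > 0:
        if e & 1:
            result = mat_mul(result, base, mod)
        base = mat_mul(base, base, mod)
        e >>= 1
    return result[0][0]
```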
Please note that you can generalize this to recurrences of the form
$$x_{n+k} = a_1 x_{n+k-1} + a_2 x_{n+k-2} + \dots + a_k x_n.$$
It suffices to see that
$$\begin{pmatrix} x_{n+k} \\ x_{n+k-1} \\ \vdots \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} a_1 & a_2 & \cdots & a_{k-1} & a_k \\ 1 & 0 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix} \begin{pmatrix} x_{n+k-1} \\ x_{n+k-2} \\ \vdots \\ x_n \end{pmatrix},$$
so in the end you just need to compute the power of a bigger ($k \times k$) matrix. The complexity in $n$ stays the same!
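The same idea, sketched for a general order-$k$ recurrence using the companion matrix above (naming and conventions are mine: `coeffs` holds $a_1, \dots, a_k$ and `init` holds $x_1, \dots, x_k$):

```python
def mul(A, B, mod=None):
    # product of two k x k matrices, optionally reduced modulo `mod`
    k = len(A)
    C = [[0] * k for _ in range(k)]
    for i in range(k):
        for t in range(k):
            if A[i][t]:
                for j in range(k):
                    C[i][j] += A[i][t] * B[t][j]
    if mod is not None:
        C = [[x % mod for x in row] for row in C]
    return C

def linear_recurrence_term(coeffs, init, n, mod=None):
    # n-th term (1-indexed) of x_{m+k} = a_1 x_{m+k-1} + ... + a_k x_m
    k = len(coeffs)
    if n <= k:
        return init[n - 1] if mod is None else init[n - 1] % mod
    # companion matrix of the recurrence
    A = [[0] * k for _ in range(k)]
    A[0] = list(coeffs)
    for i in range(1, k):
        A[i][i - 1] = 1
    # P = A^(n-k) by exponentiation by squaring
    P = [[int(i == j) for j in range(k)] for i in range(k)]
    e, B = n - k, A
    while e > 0:
        if e & 1:
            P = mul(P, B, mod)
        B = mul(B, B, mod)
        e >>= 1
    # state vector (x_k, ..., x_1); first entry of P @ state is x_n
    state = list(reversed(init))
    x = sum(P[0][j] * state[j] for j in range(k))
    return x if mod is None else x % mod
```

For Fibonacci, `linear_recurrence_term([1, 1], [1, 1], n)` reproduces $f_n$; for a tribonacci-type recurrence, pass three coefficients and three initial values.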
What to remember from this post?
- Don’t use recursion (without memoization) when computing recurrent sequences. You won’t get very far and you’ll waste your resources!
- You can compute the $n$-th value of a recurrent sequence with constant coefficients with $O(\log n)$ complexity using matrix exponentiation. This works well even for huge $n$ when working modulo a fixed integer.
Putnam 2018 – Problem B4
B4. Given a real number $a$, we define a sequence by $x_0 = 1$, $x_1 = x_2 = a$, and $x_{n+1} = 2x_n x_{n-1} - x_{n-2}$ for $n \geq 2$. Prove that if $x_n = 0$ for some $n$, then the sequence is periodic.
Putnam 2018, Problem B4
Solution: The path to the solution for this problem is really interesting. The first question that pops into mind when seeing the problem is: how do I prove that a sequence is periodic? It’s not obvious from the given recurrence relation that this should happen, and what is the role of the condition $x_n = 0$?
SEEMOUS 2014
Problem 1. Let be a nonzero natural number and be a function such that . Let be distinct real numbers. If
prove that is not continuous.
Problem 2. Consider the sequence given by
Prove that the sequence is convergent and find its limit.
Problem 3. Let $A \in \mathcal{M}_n(\mathbb{C})$ and $a \in \mathbb{C}$, $a \neq 0$, such that $A - A^* = 2aI_n$, where $A^* = \overline{A^t}$ and $\overline{A}$ is the conjugate matrix of $A$.
(a) Show that $|\det A| \geq |a|^n$.
(b) Show that if $|\det A| = |a|^n$ then $A = aI_n$.
Problem 4. a) Prove that .
b) Find the limit
Agrégation 2013 – Analysis – Part 3
Part III: Müntz spaces and the Clarkson–Erdős Theorem
Recall that for every we define and that is a strictly increasing sequence of positive integers.
1. Suppose that and . Let . Define and by recurrence for define by
(a) Calculate and prove that
(b) Prove that for every is a linear combination of .
(c) Prove that for every we have .
(d) Deduce that .
(e) Conclude that .
From here on suppose that is arbitrary and the series converges. For denote . For denote the cardinal of the set .
Nice characterization of convergence
Suppose $X$ is a topological space, and consider a sequence $(x_n) \subset X$ with the following property:
- every subsequence of $(x_n)$ has a further subsequence converging to $x$.
Then $x_n \to x$.
Increasing sequence of real numbers
Define the sequence in the following way: is a positive integer and
Prove that from a certain moment, the sequence is increasing.
Interesting sequence
Consider an infinite sequence such that for any positive integers. Prove that if for all , then .
IMO Shortlist 2002
Property of a Sequence
Consider a bounded sequence $(x_n)$ of real numbers such that $x_{n+1} - x_n \to 0$. Prove that the set of accumulation points of such a sequence is a closed interval.