The silver ratio

Most people have heard of the golden ratio, but have you ever heard of the silver ratio? I only heard of it this week. The golden ratio can be expressed by a continued fraction in which all coefficients equal 1.

The silver ratio is the analogous continued fraction with all coefficients equal to 2.

You might think for a moment that the silver ratio should be just twice the golden ratio, but the coefficients contribute to the continued fraction in a non-linear way. The silver ratio actually equals 1 + √2. The golden ratio has a simple geometric interpretation. I don’t know of a geometric interpretation of the silver ratio. (Update: See Maxwell’s Demon for geometric applications of the silver ratio.)

A previous post mentioned that the golden ratio and related numbers are the worst case for Hurwitz’s theorem. The silver ratio and its cousins are the second worst case for the theorem.

Related posts:

Breastfeeding, the golden ratio, and rational approximation
Golden ratio and special angles
Connecting Fibonacci and geometric sequences
Fibonacci numbers and numerical integration


Breastfeeding, the golden ratio, and rational approximation

Gil Kalai’s blog featured a guest post the other day about breastfeeding twins. The post commented in passing that

φ, the golden ratio, is the number hardest to approximate by rationals.

What could this possibly have to do with breastfeeding? The post described a pair of twins with hunger cycles sin(t) and sin(φt), functions that hardly ever come close to synchronizing. The constant φ is difficult to approximate by rational numbers, in a sense that I describe below, and this explains why the two functions are so often out of sync.

[Figure: graphs of sin(t) and sin(φt)]

In one sense, φ is very easy to approximate by rationals. The ratio of any two consecutive Fibonacci numbers is a rational approximation to φ, the approximation improving as you go further out the Fibonacci sequence. The same is true for any generalized Fibonacci sequence, though the approximation may not be very good until you go out a ways in the sequence, depending on your starting values. So φ is easy to approximate in the sense that it is convenient to think of approximations.

Now let’s look at the sense in which φ is hard to approximate. How accurately can you approximate an irrational number ξ by a rational number a/b? Obviously you can do a better and better job by picking larger and larger fractions. For example, you could always take the first n digits of the decimal expansion of ξ and use that to make a rational approximation with denominator 10^n. But that might be very inefficient. How well can you approximate ξ relative to the size of the denominator? This question is answered in general by a theorem of Hurwitz.

Given any irrational number ξ, there exist infinitely many different rational numbers a/b such that

\left| \xi - \frac{a}{b} \right| < \frac{1}{\sqrt{5}\, b^2}

The decimal expansion idea mentioned above is wasteful because its error only goes down in proportion to the reciprocal of the denominator, while Hurwitz’s theorem says it’s possible for the error to go down in proportion to the reciprocal of the square of the denominator.

Can we do any better than Hurwitz’s theorem? For example, is it possible to replace √5 with some larger constant? Not in general: the golden ratio φ provides the counterexample to any would-be refinement of Hurwitz’s theorem. The constant φ is a worst case. That is the sense in which φ is the number hardest to approximate by rationals. If φ and some related numbers are excluded, then the constant √5 can be increased.

Going back to breastfeeding, suppose the twins had hunger cycles sin(t) and sin(ξt). If ξ is irrational, the two curves never exactly share a period. But if ξ were equal to a rational number a/b in lowest terms, the two functions would have a common period of 2bπ. If ξ were (approximately) equal to a/b with b small, the feedings would often synchronize. Since φ requires large values of b to make a good rational approximation, ξ = φ is a worst case.

Related posts:

Golden ratio and special angles
Connecting Fibonacci and geometric sequences
Fibonacci numbers at work


Connecting Fibonacci and geometric sequences

Here’s a quick demonstration of a connection between the Fibonacci sequence and geometric sequences.

The famous Fibonacci sequence starts out 1, 1, 2, 3, 5, 8, 13 … The first two terms are both 1, then each subsequent term is the sum of the two preceding terms.

A generalized Fibonacci sequence can start with any two numbers and then apply the rule that subsequent terms are defined as the sum of their two predecessors. For example, if we start with 3 and 4, we get the sequence 3, 4, 7, 11, 18, 29, …

Let φ be the golden ratio, the positive solution to the equation 1 + x = x². Let φ’ be the conjugate golden ratio, the negative solution to the same quadratic equation. Then φ = (1 + √5)/2 and φ’ = (1 − √5)/2.

Now let’s look at a generalized Fibonacci sequence starting with 1 and φ. Then our terms are 1, φ, 1 + φ, 1 + 2φ, 2 + 3φ, 3 + 5φ, … Let’s see whether we can simplify this sequence.

Now 1 + φ = φ² because of the quadratic equation φ satisfies. That tells us the third term equals φ². So our sequence starts out 1, φ, φ². This is looking like a geometric sequence. Could the fourth term be φ³? In fact, it is. Since the fourth term is the sum of the second and third terms, it equals φ + φ² = φ(1 + φ) = φ(φ²) = φ³. You can continue this line of reasoning to prove that the generalized Fibonacci sequence starting with 1 and φ is in fact the geometric sequence 1, φ, φ², φ³, …

Now start a generalized Fibonacci sequence with 1 and φ’. Because φ’ is also a solution to 1 + x = x², it follows that the sequence 1, φ’, 1 + φ’, 1 + 2φ’, 2 + 3φ’, … equals the geometric sequence 1, φ’, (φ’)², (φ’)³, …

Related posts:

Honey bee genealogy (elementary)
Fibonacci numbers at work (advanced)


Fibonacci numbers at work

Today I needed to use Fibonacci numbers to solve a problem at work. Fibonacci numbers are great fun, but I don’t recall needing them in an applied problem before.

I needed to compute a series of integrals of functions of the form

f(x, y) = x^a (1−x)^b y^c (1−y)^d p(x, y)

over the unit square for a statistical application. The function p(x, y) is a little complicated but its specific form is not important here. If the constants a, b, c, and d are all positive, as they usually are in my application, the integrand can be extended to a continuous periodic function in the plane. Lattice rules are efficient for such integration problems, and the optimal lattice rules for two-variable integration are given by Fibonacci lattice rules.

If F_k is the kth Fibonacci number and {x} denotes the fractional part of x, that is, x minus the greatest integer not exceeding x, the kth Fibonacci lattice rule approximates the integral of f by

\int_0^1 \int_0^1 f(x, y) \, dx \, dy \approx \frac{1}{F_k} \sum_{j=0}^{F_k - 1} f\left( \left\{ \frac{j}{F_k} \right\}, \left\{ \frac{j F_{k-1}}{F_k} \right\} \right)

This rule worked well. I compared the results to those using the two-variable trapezoid rule and the lattice rule always gave more accuracy for the same number of integration points. (Normally the trapezoid rule is not very efficient. However, it can be surprisingly efficient for periodic integrands. See point #2 here. But the Fibonacci lattice rule was even more efficient.)

To implement this integration rule, I first had to write a function to compute Fibonacci numbers. This is such a common academic exercise that it is almost a cliché. I was surprised to have a practical need for such a function. My implementation was

    #include <cmath>

    // Compute the kth Fibonacci number from the closed-form
    // (Binet) formula. Valid as long as F_k fits in an int.
    int Fibonacci(int k)
    {
        double phi = 0.5*(1.0 + sqrt(5.0));
        // phi^k/sqrt(5) is within 1/2 of F_k, so adding 0.5
        // and truncating rounds to the exact answer.
        return int( pow(phi, k)/sqrt(5.0) + 0.5 );
    }

The reason this works is that

F_k = \frac{\varphi^k - (-\varphi)^{-k}}{\sqrt{5}}

where

\varphi = \frac{1 + \sqrt{5}}{2}

is the golden ratio. The second term in the formula for F_k is smaller than 1/2 in absolute value, and F_k is an integer, so rounding the first term to the nearest integer gives the exact result. Adding 0.5 before casting the result to an integer causes the result to be rounded to the nearest integer.

If you’re interested in technical details of the numerical integration method, see Lattice Methods for Multiple Integration by I. H. Sloan and S. Joe.
