Suppose you have a number *x* between 0 and 1. You want to find a rational approximation for x, but you only want to consider fractions with denominators below a given limit.

For example, suppose *x* = 1/*e* = 0.367879… Rational approximations with powers of 10 in the denominator are trivial to find: 3/10, 36/100, 367/1000, etc. But say you’re willing to have a denominator as large as 10. Could you do better than 3/10? Yes, 3/8 = 0.375 is a better approximation. What about denominators no larger than 100? Then 32/87 = 0.36781… is the best choice, much better than 36/100.

How do you find the best approximations? You could do a brute force search. For example, if the maximum denominator size is *N*, you could try all fractions with denominators less than or equal to *N*. But there’s a much more efficient algorithm. The algorithm is related to the Farey sequence named after John Farey, though I don’t know whether he invented the algorithm.
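
As a baseline, the brute force search can be sketched in a few lines (the helper name `brute_force` is mine, not from the post): for each denominator *q* up to *N*, the best numerator is simply *xq* rounded to the nearest integer.

```python
def brute_force(x, N):
    """Best rational approximation to x by trying every denominator up to N."""
    best = (0, 1)
    for q in range(1, N + 1):
        p = round(x * q)  # nearest numerator for this denominator
        if abs(x - p / q) < abs(x - best[0] / best[1]):
            best = (p, q)
    return best

print(brute_force(0.367879, 10))   # (3, 8), as in the example above
print(brute_force(0.367879, 100))  # (32, 87)
```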

The idea is to start with two fractions, *a*/*b* = 0/1 and *c*/*d* = 1/1. We update either *a*/*b* or *c*/*d* at each step so that *a*/*b* will be the best lower bound of *x* with denominator no bigger than *b*, and *c*/*d* will be the best upper bound with denominator no bigger than *d*. At each step we do a sort of binary search by introducing the *mediant* of the upper and lower bounds. The mediant of *a*/*b* and *c*/*d* is the fraction (*a*+*c*)/(*b*+*d*) which always lies between *a*/*b* and *c*/*d*.
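
As a quick check of the mediant property, here is a sketch using Python's `fractions` module (`mediant` is my own helper name). Note that `Fraction` keeps fractions in lowest terms, which is fine here since Stern-Brocot fractions are always in lowest terms.

```python
from fractions import Fraction

def mediant(p, q):
    # (a+c)/(b+d) for p = a/b and q = c/d in lowest terms
    return Fraction(p.numerator + q.numerator, p.denominator + q.denominator)

lo, hi = Fraction(0, 1), Fraction(1, 1)
m = mediant(lo, hi)   # 1/2
assert lo < m < hi    # the mediant lies strictly between its parents
```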

Here is an implementation of the algorithm in Python. The code takes a number *x* between 0 and 1 and a maximum denominator size *N*. It returns the numerator and denominator of the best rational approximation to *x* using denominators no larger than *N*.

```python
def farey(x, N):
    a, b = 0, 1
    c, d = 1, 1
    while (b <= N and d <= N):
        mediant = float(a+c)/(b+d)
        if x == mediant:
            if b + d <= N:
                return a+c, b+d
            elif d > b:
                return c, d
            else:
                return a, b
        elif x > mediant:
            a, b = a+c, b+d
        else:
            c, d = a+c, b+d

    if (b > N):
        return c, d
    else:
        return a, b
```

In Python 3, the `float` call could be removed, since the division operator performs floating point division on integers.

Read more about rational approximation in *Breastfeeding, the golden ratio, and rational approximation*.

Here’s an example of a situation in which you might need rational approximations. Suppose you’re designing an experiment which will randomize subjects between two treatments A and B. You want to randomize in blocks of size no larger than *N* and you want the probability of assigning treatment A to be *p*. You could find the best rational approximation *a*/*b* to *p* with denominator *b* no larger than *N* and use the denominator as the block size. Each block would be a permutation of *a* A’s and *b*–*a* B’s.
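
A sketch of that idea using the standard library (`Fraction.limit_denominator` does the rational approximation; the helper name `randomization_block` is mine):

```python
import random
from fractions import Fraction

def randomization_block(p, N):
    """One treatment block: approximate p by a/b with b <= N,
    then shuffle a A's and b-a B's."""
    f = Fraction(p).limit_denominator(N)
    a, b = f.numerator, f.denominator
    block = ['A'] * a + ['B'] * (b - a)
    random.shuffle(block)
    return block

# e.g. p = 0.4 with N = 10 gives a/b = 2/5,
# so each block has size 5 and contains exactly two A's
```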

**Update**: Eugene Wallingford wrote a blog post about implementing the algorithm in Klein, a very restricted language used for teaching compiler writing.

**Update**: There’s a bug in the code. See discussion below.

There is a small bug in the code. The `return` in the loop allows returning a denominator larger than *N*. For example, `farey(0.25, 3)` returns `(1, 4)`.

Thanks. I fixed it.

The fix brings up an interesting point. If the mediant is exactly *x*, but the mediant’s denominator is too big, you need to return either *a*/*b* or *c*/*d*. Which one do you choose? You can show that the mediant is closer to whichever fraction has the larger denominator.
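
A quick numerical check of that claim (a sketch: 1/3 has the larger denominator, so the mediant 2/5 should be closer to 1/3 than to 1/2):

```python
from fractions import Fraction

a_b, c_d = Fraction(1, 3), Fraction(1, 2)
m = Fraction(a_b.numerator + c_d.numerator,
             a_b.denominator + c_d.denominator)  # the mediant, 2/5
assert abs(m - a_b) < abs(m - c_d)  # 1/15 < 1/10: closer to 1/3
```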

Hi John,

First, let me take the chance to let you know I love the blog.

I thought I’d mention that Farey addition (calculating the mediant) and the Farey sequence make a few appearances in the highly enjoyable book Indra’s Pearls. In particular, they have a nice visual showing how a mediant and its two parent fractions are vertices of an ideal triangle in the tessellation generated by the modular group (on the upper half plane). It is nice to picture this algorithm in the context of that geometrical picture. Starting with the triangle edge defined by 0/1 and 1/1, you wind your way along edges of the modular tessellation, moving over successively smaller looking triangles as you close in on the number you want to approximate.

A friend used to use a version of this to mischievous ends: he would find an approximation good to two decimal places, and use that on his checks. $138.25 becomes “$138 1/4.” At some point, he met the poor teller who everyone passed his checks on to, and stopped doing it so much. He still uses it every now and again, but only when he knows it will cause humans outside his bank some consternation (eg: mortgage payments).

If you start with (c, d) = (1, 0), the algorithm works for x > 1, too.

The code only returns fractions less than *x*, but a fraction bigger than *x* could still be a better approximation. Example: `farey(0.605551, 30)` gives 3/5, but 17/28 is a better approximation.

The reverse of this technique (in binary) is arithmetic coding, which turns a given rational number into an interval of decimal numbers. This is then repeated, with the next rational being inserted into the next interval. This allows the most general (though least efficient) form of compression.

http://en.wikipedia.org/wiki/Arithmetic_coding

Unfortunately, some work confirmed Ttl’s result and led me to discover that the best approximation is found using continued fractions, which I suppose is how this is calculated in Python:

`assert Fraction(17, 28) == Fraction('0.605551').limit_denominator(30)`

– Paddy.

This reminds me of the Fibonacci version of the golden section search algorithm:

```
search(x, N):
    a, (mediant, b) = 0, fibRange(N)  // closest Fibonacci numbers such that mediant < N <= b
    while (a != b):
        if x == mediant:
            return mediant
        if x < mediant:
            mediant, b = a+b-mediant, mediant
        else:
            a, mediant = mediant, 3*mediant-(a+b)
    return mediant
```

Of course, that algorithm doesn’t give a denominator (except maybe N).

What you described is a binary search on the Stern-Brocot tree, but your code returns the convergents of the simple continued fraction, both of which are related to (but neither quite corresponds to) the best rational approximation.

You never really described how to turn that binary search into a return value. Your code returns part of the carrier value* of that node tree. In the case of the Stern-Brocot tree, this corresponds to the label of the last node that changed the direction of the search, which in turn corresponds to a convergent of the continued fraction.

It should be noted that simply performing a binary search on the Stern-Brocot tree up to a given bound is not entirely satisfactory, because while going deeper in the tree will always get you closer to your goal eventually, individual steps can result in worse approximations. (Specifically, after the search changes direction, your approximations get worse until you get halfway to the next direction change.)

Now, all convergents are best rational approximations (for a suitably small bound), but not all best approximations (for some bound) are convergents. However, a best approximation might inflate the denominator significantly while improving the approximation only barely; by contrast, the convergents are the best of the best approximations.

*carrier in the F-(co)algebraic sense.
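
The convergents discussed here can be computed directly from the continued fraction expansion. Here is a sketch (my own helper, using the standard recurrence for convergent numerators and denominators):

```python
def convergents(x, n):
    """First n convergents (p, q) of the simple continued fraction of x."""
    cs = []
    p0, q0 = 1, 0        # p_{-1}, q_{-1}
    p1, q1 = int(x), 1   # p_0 = a_0, q_0 = 1
    cs.append((p1, q1))
    frac = x - int(x)
    while len(cs) < n and frac > 1e-12:
        x = 1 / frac
        a = int(x)
        frac = x - a
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        cs.append((p1, q1))
    return cs

# For 0.605551 the convergents begin 0/1, 1/1, 1/2, 2/3, 3/5, 20/33, ...
# Note that 17/28 is a best approximation with denominator <= 30,
# yet it is not a convergent.
```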

I tried writing the above algorithm in C++ and can’t get it to work. A very good algorithm for approximating rational numbers is in the book Mathematical Software Tools in C++ by Alain Reverchon.

Here is a translation into C++ with some minor updates. You cannot really determine equality between two floating point numbers, and besides, I need accuracy to some epsilon as well as a maximum size for the numerator. Anyway, here is a C++ offering.

Usage is pretty simple.

Apologies for the loss of indenting above… And thanks to John for the Python version :)

David: I put <pre> tags around your code and that restored the indenting.

Thanks David W.! I’m going to give it a spin.

John,

This is a pretty neat algorithm. One other interesting application is sampling rate conversion (decimation/interpolation) of signals, especially when input and output rates are constrained by, for example, channel bandwidth, hardware/software interface rates, etc. Decimation/interpolation by rational factors is much better for hardware implementation, and it is even better if we can find the smallest denominator.
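
For instance, a resampling ratio can be approximated with a bounded denominator using the standard library. A sketch, where the 44.1 kHz to 48 kHz rates are just an illustrative example:

```python
from fractions import Fraction

ratio = Fraction(48000, 44100)        # exact ratio, reduces to 160/147
small = ratio.limit_denominator(50)   # 37/34: coarser, but a cheaper filter bank
exact = ratio.limit_denominator(200)  # 160/147: exact, since 147 <= 200
```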

Paddy,

Thanks for the pointer to the fractions module in Python. Did not know it existed.

Amal

I think there is a small bug in your code. Your code does not always return the best rational approximation. Try it for x = 0.58496250072 and N = 253, for example. Your code returns 31/53 but should return 148/253.

I suggest changing the test in the while loop to `b + d <= N` and testing after the loop whether *a*/*b* or *c*/*d* is the better approximation.
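
A sketch of that fix (the name `farey_best` is mine): loop while the mediant’s denominator stays within bounds, then return whichever endpoint is closer to *x*.

```python
def farey_best(x, N):
    """Best rational approximation to x in [0, 1] with denominator <= N."""
    a, b = 0, 1   # lower bound a/b
    c, d = 1, 1   # upper bound c/d
    while b + d <= N:
        mediant = (a + c) / (b + d)
        if x == mediant:
            return a + c, b + d
        elif x > mediant:
            a, b = a + c, b + d
        else:
            c, d = a + c, b + d
    # compare the two endpoints and return the closer one
    if abs(x - a / b) <= abs(x - c / d):
        return a, b
    return c, d

print(farey_best(0.58496250072, 253))  # (148, 253)
print(farey_best(0.605551, 30))        # (17, 28)
```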

Did you not think this up about 30 years ago?

sorry for dup,

Here is the correct code

Hi John

Thank you for this algorithm, that was really helpful :).

Hi John,

I stumbled across your article, but like some of the commenters I found that the fractions your Python code produced did not match the best case. Simplifying the function appears to give better agreement for the floating point values indicated in the comments.

This version assumes one is using Python 3+