My previous post looked at continued fractions and rational approximations for *e* and gave a little Python code. I found out later there’s a more direct way to do this in Python using Sage.

At its simplest, the function `continued_fraction` takes a real number and returns a truncated continued fraction representation. For example, `continued_fraction(e)` returns

[2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1, 12, 1, 1]

Optionally, you can also specify the number of bits of precision in the real argument and the number of terms desired.
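Outside of Sage, the basic expansion is easy to sketch in plain Python. This is a naive float-based version, not Sage's implementation, and roundoff limits how many terms come out correct:

```python
from math import e

def cf_terms_float(x, n):
    """Naive continued fraction expansion of x, truncated to n terms.
    Floating-point roundoff limits how many terms are trustworthy."""
    terms = []
    for _ in range(n):
        a = int(x)        # the next term is the integer part
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac      # continue with the reciprocal of the fractional part
    return terms

print(cf_terms_float(e, 15))
```

With a 53-bit double, the first dozen or so terms for *e* match the list above; after that the accumulated roundoff starts producing garbage terms.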

By calling the `convergents` method on the return value of `continued_fraction(e)`, you can find a sequence of rational approximations based on the continued fraction. For example,

print(continued_fraction(e).convergents())

produces

[2, 3, 8/3, 11/4, 19/7, 87/32, 106/39, 193/71, 1264/465, 1457/536, 2721/1001, 23225/8544, 25946/9545, 49171/18089, 517656/190435, 566827/208524, 1084483/398959, 13580623/4996032, 14665106/5394991, 28245729/10391023].
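If you want to see what a `convergents` method does under the hood, the standard recurrence is easy to write out in plain Python with the `fractions` module (a sketch, not Sage's actual code):

```python
from fractions import Fraction

def convergents(terms):
    """Convergents p_n/q_n of a continued fraction [a0; a1, a2, ...],
    built with the recurrences p_n = a_n*p_{n-1} + p_{n-2}
    and q_n = a_n*q_{n-1} + q_{n-2}."""
    p_prev, p = 1, terms[0]
    q_prev, q = 0, 1
    result = [Fraction(p, q)]
    for a in terms[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        result.append(Fraction(p, q))
    return result

# First eleven terms of the expansion of e shown above
print(convergents([2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1]))
```

The output reproduces the first eleven convergents in the list above, ending with 2721/1001.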

To get higher precision output, you need higher precision input. For example, you could pass in `RealField(200)(e)` rather than simply `e` to tell Sage that you’d like to use the 200-bit representation of *e* rather than the default precision.
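To see why input precision matters, here is a plain-Python sketch (names are my own, not Sage's) that builds a high-precision rational approximation of *e* from its series and expands it exactly, with no floating point involved:

```python
import math
from fractions import Fraction

# e as an exact rational, accurate to about 1/30! (roughly 1e-32),
# far better than the 53-bit double math.e.
e_hi = sum(Fraction(1, math.factorial(k)) for k in range(30))

def cf_terms_exact(x, n):
    """Continued fraction terms of an exact Fraction -- no roundoff at all."""
    terms = []
    for _ in range(n):
        a = x.numerator // x.denominator
        terms.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return terms

print(cf_terms_exact(e_hi, 20))
```

The twenty terms printed agree with the Sage output above; a float input would have started drifting before then.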

If you like continued fractions, I recommend you read the relevant parts of the classic Numerical Methods That Work. The details are probably obsolete but it’s fun reading (at least, if you think that sort of thing is fun to read).

P.S. I just looked up Acton in Wikipedia and was surprised to see he’s still alive. And he wrote a second book (published at the age of 77!). I wonder if it’s any good. It’s sobering to read Numerical Methods That Work: it’s so wonderful and so readable, yet in this modern era there’s really not much reason to read it. Perhaps William Goldman (hey, I checked: he’s still alive too!) or some equivalent could prepare a 50-page “good parts” version that could still be useful as a basic textbook.

Andrew: Numerical Methods That Work is fantastic. And I still recommend people read it. The specific computing limitations he mentions are now quaint, but the principles are still valid. You’ll still run into the same problems, just at different places.

For example, Acton talks about catastrophic loss of precision with single precision arithmetic. Now everything is done in double precision, and that’s enough to save you from the specific example he gives. But with a minor change to the problem, you’d run into the same loss of precision in double precision.
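A standard illustration of this kind of cancellation (not Acton's specific example) in Python:

```python
import math

x = 1e-8
naive = 1 - math.cos(x)            # cos(x) rounds to exactly 1.0, so this is 0.0
stable = 2 * math.sin(x / 2) ** 2  # algebraically the same quantity, no cancellation
print(naive, stable)               # true value is about 5e-17
```

The naive form loses every significant digit; the rewritten form recovers them all. Shrink `x` a little more and even quadruple precision would eventually fail the naive way.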

I’m always using integration techniques I learned from that book, such as subtracting off a singularity before doing numerical integration then adding back the analytic integral of the singular part.
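A minimal sketch of that technique, using ∫₀¹ cos(x)/√x dx as a made-up example: the singular part 1/√x has antiderivative 2√x, contributing exactly 2 on [0, 1], and the remainder is tame enough for an ordinary quadrature rule.

```python
import math

def midpoint(f, a, b, n=10_000):
    """Composite midpoint rule on [a, b] with n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# cos(x)/sqrt(x) blows up at 0, but after subtracting 1/sqrt(x)
# the remainder behaves like -x**1.5/2 near 0: no singularity left.
def smooth(x):
    return (math.cos(x) - 1) / math.sqrt(x)

corrected = 2.0 + midpoint(smooth, 0.0, 1.0)   # exact part + numerical part
naive = midpoint(lambda x: math.cos(x) / math.sqrt(x), 0.0, 1.0)
print(corrected, naive)  # corrected is far closer to the true value 1.8090...
```

The naive version spends all its effort fighting the singularity and still misses by a few parts in a thousand; the corrected version is accurate to many more digits with the same number of function evaluations.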

As for continued fractions, I used to think they were just for recreational math, an amusing typographical novelty. And yet I continually run into them. They often give the best numerical algorithm for evaluating some function.

> I’m always using integration techniques I learned from that book, such as subtracting off a singularity before doing numerical integration then adding back the analytic integral of the singular part.

I’m wondering, is there a connection between this and so-called control variates? The approach sounds very similar. I have a vague recollection that the analogy might have been mentioned in the “Numerical Recipes” book as smoothing out the integrand, but I’m not sure whether there are any formal equivalence theorems or the like. http://en.wikipedia.org/wiki/Control_variates#Example

A version of this algorithm is implemented in R’s “MASS” package as the function `fractions()` (some of the guts are in `MASS:::.rat`).

For example, `fractions(exp(1)-2)` gives 719/1001, which agrees with the convergent 2721/1001 = 2 + 719/1001 above. Higher values of the `cycles` argument work too.
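For what it's worth, plain Python can do the same kind of thing with `fractions.Fraction.limit_denominator`:

```python
import math
from fractions import Fraction

# Closest fraction to e - 2 with denominator at most 1001; the method
# walks the continued fraction expansion internally.
approx = Fraction(math.e - 2).limit_denominator(1001)
print(approx)  # 719/1001
```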

Sorry to join this party late, but I found this post today via Andrew Gelman’s blog.

I’ve never read Numerical Methods That Work, but it sounds like something I should. I’ll put it on my ever-growing Goodreads “to read” list.

Subtracting out singularities before numerical integration has a long history outside of computing. I learned the method in a Complex Analysis class.* We found the singularities or poles, drew circles in the complex plane about each, and then integrated the well-behaved portions piecewise.

Lastly, we computed the contributions from the circles in the limit as their radii approached zero and added those to the piecewise sum.

Really so elegant.

* I took the CA class/seminar with Stephen Smale. He seemed a bit distant and distracted and I thought that was just his personality. I had no idea he was embroiled in this mess at the same time. (Scroll to the link at the bottom.)

http://badmomgoodmom.blogspot.com/2007/08/gaia-girl-named-me-rockin-girl-blogger.html