Approximation by prime powers

The well-known Weierstrass approximation theorem says that polynomials are dense in C [0, 1]. That is, given any continuous function f on the unit interval, and any ε > 0, you can find a polynomial P such that f and P are never more than ε apart.

This means that linear combinations of the polynomials

1, x, x², x³, …

are dense in C [0, 1].

Do you need all these powers of x? Could you approximate any continuous function arbitrarily well if you left out one of these powers, say x⁷? Yes, you could.

You cannot omit the constant polynomial 1, but you can leave out any other power of x. In fact, you can leave out a lot of powers of x, as long as the sequence of exponents doesn’t thin out too quickly.

Müntz approximation theorem

Herman Müntz proved in 1914 that a necessary and sufficient pair of conditions on the exponents of x is that the first exponent is 0 and that the sum of the reciprocals of the remaining exponents diverges.

In other words, the sequence of powers of x

x^λ₀, x^λ₁, x^λ₂, …

with

λ₀ < λ₁ < λ₂ < …

is dense in C [0, 1] if and only if λ₀ = 0 and

1/λ₁ + 1/λ₂ + 1/λ₃ + … = ∞

Prime power example

Euler proved in 1737 that the sum of the reciprocals of the primes diverges, so the sequence

1, x², x³, x⁵, x⁷, x¹¹, …

is dense in C [0, 1]. We can find a polynomial as close as we like to any particular continuous function if we combine enough prime powers.

Let’s see how well we can approximate |x − ½| using prime exponents up to 11.

The polynomial plotted above is

0.4605 − 5.233 x² + 7.211 x³ + 0.9295 x⁵ − 4.4646 x⁷ + 1.614 x¹¹.

This polynomial is not the best possible uniform approximation: it’s the least squares approximation. That is, it minimizes the 2-norm and not the ∞-norm. That’s because it’s convenient to do a least squares fit in Python using scipy.optimize.curve_fit.
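Here's a sketch of how such a fit can be computed. The grid size and function name below are my choices for illustration, not necessarily what produced the numbers above:

    import numpy as np
    from scipy.optimize import curve_fit

    # Basis: a constant plus the prime powers x^2, x^3, x^5, x^7, x^11
    def p(x, a0, a2, a3, a5, a7, a11):
        return a0 + a2*x**2 + a3*x**3 + a5*x**5 + a7*x**7 + a11*x**11

    x = np.linspace(0, 1, 1000)   # sample points in [0, 1]
    y = np.abs(x - 0.5)           # the function to approximate

    coeffs, _ = curve_fit(p, x, y)
    print(coeffs)   # should land near the coefficients quoted above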

Incidentally, the Müntz approximation theorem holds for the 2-norm as well.


Logarithm approximation curiosity

I’ve written before about three simple approximations for logarithms, for base 10

log₁₀(x) ≈ (x − 1)/(x + 1)

base e,

logₑ(x) ≈ 2(x − 1)/(x + 1)

and base 2

log₂(x) ≈ 3(x − 1)/(x + 1).

These can be used to mentally approximate logarithms, accurately enough for quick estimates.

Here’s what’s curious about the approximations: the proportionality constants are apparently wrong, and yet the approximations are each fairly accurate.

It is not the case that

logₑ(x) = 2 log₁₀(x).

In fact,

logₑ(x) = logₑ(10) log₁₀(x) = 2.3 log₁₀(x)

and so it seems that the approximation for natural logarithms should be off by 15%. But it’s not. The error is less than 2.5%.

Similarly,

log₂(x) = log₂(10) log₁₀(x) = 3.32 log₁₀(x)

and so the approximation for logarithms base 2 should be off by about 10%. But it’s not. The error here is also less than 2.5%.

What’s going on?

First of all, the approximation errors are nonlinear functions of x and the three approximation errors are not proportional. Second, the approximation for log_b(x) is only good for 1/√b ≤ x ≤ √b. You can always reduce the problem of calculating log_b(x) to the problem of calculating the log in the range 1/√b ≤ x ≤ √b, and so this isn't a problem.
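To see the range reduction in code, here's a minimal sketch for base 2; log2_approx is an illustrative helper name of my own, not something from the posts above:

    import math

    def log2_approx(x):
        """Approximate log2(x): reduce x to [1/sqrt(2), sqrt(2)], then use 3(x-1)/(x+1)."""
        k = 0
        while x > math.sqrt(2):
            x /= 2
            k += 1
        while x < 1 / math.sqrt(2):
            x *= 2
            k -= 1
        return k + 3 * (x - 1) / (x + 1)

    print(log2_approx(100), math.log2(100))   # about 6.6316 vs 6.6439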

Here’s a plot of the three error functions.

This plot makes it appear that the approximation error is much worse for natural logs and logs base 2 than for logs base 10. And it would be if we ignored the range of each approximation. Here’s another plot of the approximation errors, plotting each over only its valid range.

When restricted to their valid ranges, the approximations for logarithms base e and base 2 are more accurate than the approximation for logarithms base 10. Both errors are small, but in opposite directions.

Here’s a look at the relative approximation errors.

We can see that the relative errors for the log₂ and logₑ approximations are less than 2.5%, while the relative error for log₁₀ can be up to 15%.

How big will a carpet be when you roll or unroll it?

If you know the dimensions of a carpet, what will the dimensions be when you roll it up into a cylinder?

If you know the dimensions of a rolled-up carpet, what will the dimensions be when you unroll it?

This post answers both questions.

Flexible carpet: solid cylinder

The edge of a rolled-up carpet can be described as an Archimedean spiral. In polar coordinates, this spiral has the equation

r = hθ / 2π

where h is the thickness of the carpet. The previous post gave an exact formula for the length L of the spiral, given the maximum value of θ, which we denoted T. It also gave a simple approximation that is quite accurate when T is large, namely

L = hT² / 4π

If r₁ is the radius of the carpet as a rolled up cylinder, r₁ = hT / 2π and so T = 2πr₁ / h. So when we unroll the carpet

L = hT² / 4π = πr₁² / h.

Now suppose we know the length L and want to find the radius r₁ when we roll up the carpet. Solving the equation above for r₁ gives

r₁ = √(hL/π).

Stiff carpet: hollow cylinder

The discussion so far has assumed that the spiral starts from the origin, i.e. the carpet is rolled up so tightly that there’s no hole in the middle. This may be a reasonable assumption for a very flexible carpet. But if the carpet is stiff, the rolled up carpet will not be a solid cylinder but a cylinder with a cylindrical hole in the middle.

In the case of a hollow cylinder, there is an inner radius r₀ and an outer radius r₁. This means θ runs from T₀ = 2πr₀/h to T₁ = 2πr₁/h.

To find the length of the spiral running from T₀ to T₁, we find the length of a spiral that runs from 0 to T₁ and subtract the length of a spiral from 0 to T₀:

L = πr₁² / h − πr₀² / h = π(r₁² − r₀²)/h.

This approximation is even better because the approximation is least accurate for small T, and we’ve subtracted off that part.

Now let’s go the other way around and find the outer radius r₁ when we know the length L. We also need to know the inner radius r₀. So suppose we are wrapping the carpet around a cardboard tube of radius r₀. Then

r₁ = √(r₀² + hL/π).
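Here are both directions in code, a minimal sketch with function names of my choosing:

    import math

    def unrolled_length(r0, r1, h):
        """Length of a carpet of thickness h rolled into a cylinder
        with inner radius r0 and outer radius r1 (r0 = 0 for a solid roll)."""
        return math.pi * (r1**2 - r0**2) / h

    def rolled_radius(L, r0, h):
        """Outer radius when a carpet of length L and thickness h
        is wrapped around a core of radius r0."""
        return math.sqrt(r0**2 + h * L / math.pi)

    L = unrolled_length(2, 12, 0.25)
    print(L)                          # 560π, about 1759.3
    print(rolled_radius(L, 2, 0.25))  # recovers the outer radius 12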

How much will a cable sag? A simple approximation

Suppose you have a cable of length 2s suspended from two poles of equal height a distance 2x apart. Assuming the cable hangs in the shape of a catenary, how much does it sag in the middle?

If the cable were pulled perfectly taut, we would have s = x and there would be no sag. But in general, s > x and the cable is somewhat lower in the middle than on the two ends. The sag is the difference between the height at the end points and the height in the middle.

The equation of a catenary is

y = a cosh(t/a)

and the length of the catenary between t = −x and t = x is

2s = 2a sinh(x/a).

You could solve this equation for a, and then the sag is

g = a cosh(x/a) − a.

Solving for a given s is not an insurmountable problem, but there is a simple approximation for g that has an error of less than 1%. This approximation, given in [1], is

g² = (s − x)(s + x/2).
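Here's a sketch of a numerical check, using scipy to solve the arc length equation; the function names are mine, and solving for u = x/a rather than a itself avoids overflow in sinh:

    from math import cosh, sinh, sqrt
    from scipy.optimize import brentq

    def sag_exact(s, x):
        """Sag of a catenary of half-length s hung over half-span x (s > x)."""
        # Solve s/x = sinh(u)/u for u = x/a, then g = a(cosh(x/a) - 1).
        u = brentq(lambda u: sinh(u) / u - s / x, 1e-9, 50)
        a = x / u
        return a * (cosh(u) - 1)

    def sag_approx(s, x):
        return sqrt((s - x) * (s + x / 2))

    print(sag_exact(1.05, 1.0), sag_approx(1.05, 1.0))   # both about 0.278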

To visualize the error, I set x = 1 and let a vary to produce the plot below.

The relative error is always less than 1/117. It peaks somewhere around a = 1/3 and decreases from there, the error getting smaller as a increases (i.e. the cable is pulled tighter).


[1] J. S. Frame. An approximation for the dip of a catenary. Pi Mu Epsilon Journal, Vol. 1, No. 2 (April 1950), pp. 44–46

Binary to text to binary

Gnu Privacy Guard includes a way to encode binary files as plain ASCII text files, and turn these text files back into binary. This is intended as a way to transmit encrypted data, but it can be used to convert any kind of file from binary to text and back to binary.

To illustrate this, I’ll use Albrecht Dürer’s Melencolia I as an example. (More on this image and its mathematical significance here.)

Albrecht Dürer’s engraving Melencolia I

This image is saved in the file Melencolia.jpg.

Binary to text

If we run

   gpg --enarmor Melencolia.jpg

at a command line, it produces a file Melencolia.jpg.asc, the asc suffix indicating an ASCII file.

We can look inside this file if we’d like.

-----BEGIN PGP ARMORED FILE-----
Comment: Use "gpg --dearmor" for unpacking

/9j/4AAQSkZJRgABAQEBLAEsAAD/4QBoRXhpZgAATU0AKgAAAAgABAEaAAUAAAAB
AAAAPgEbAAUAAAABAAAARgEoAAMAAAABAAIAAAExAAIAAAARAAAATgAAAAAAAAEs
AAAAAQAAASwAAAABUGFpbnQuTkVUIHYzLjUuOAAA/9sAQwACAQECAQECAgICAgIC
...
wJ/wkzRf8IN4LCCVVwNCtM8sDnPl5zkk565NbcH7Lnw01K0gtrn4feCriB4QfLl0
S2dV+bdgApwMknAwMk+tFFcktjqif//Z
=rpd1
-----END PGP ARMORED FILE-----

The cryptic text /9j/4A…//Z is a base 64 encoded representation of the binary file. Think of the file as one big binary number. Write that number in base 64, i.e. partition the bits into groups of six. Then represent the base 64 “digits” as alphanumeric characters, plus the symbols + and /. More on the details here.

The line =rpd1 is a 24-bit CRC checksum. The equal sign is a separator, and rpd1 is a base 64 encoding of the checksum.

The JPG file is 91,272 bytes and the ASCII file is 123,712 bytes. The ASCII file is about 1/3 larger because every six bits in the binary file corresponds to an eight-bit ASCII character. The ASCII file is a little bit more than 1/3 larger because of the human-friendly text above and below the base 64 encoding, the newline characters, and the checksum.
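You can see the 4/3 ratio with Python's base64 module. This is just the raw encoding, not gpg's armor format with its headers, line breaks, and checksum:

    import base64

    with open("Melencolia.jpg", "rb") as f:
        data = f.read()

    text = base64.b64encode(data)
    print(len(text) / len(data))             # about 4/3
    assert base64.b64decode(text) == data    # the round trip is lossless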

Text to binary

If we run

    gpg --dearmor Melencolia.jpg.asc

at a command line, it produces a file Melencolia.jpg.asc.gpg. This file is bit-for-bit exactly the same as the original file, which we could confirm by running

    diff Melencolia.jpg Melencolia.jpg.asc.gpg


A curious pattern in January exponential sums

The exponential sum page on this site draws a new image every day based on plugging the month, day, and year into a formula. Some of these images are visually appealing; I’ve had many people ask if they could use the images in publications or on coffee mugs etc.

The images generally look very different from one day to the next. One reason I include the date numbers in the order I do, using the American convention, is that this increases the variety from one day to the next.

If you first became aware of the page on New Year’s Day this year, you might think the page is broken because there was no apparent change between January 1 and January 2. Yesterday’s image was different, but then today, January 4, the image looks just like the images for January 1st and 2nd. They all look like the image below.

The plots produced on each day are distinct, but they are geometrically congruent.

The exponential sum page displays a plot connecting the partial sums of a certain series given here. The axes are turned off and so only the shape of the plot is displayed. If one plot is a translation or dilation of another, the images shown on the page will be the same.

Here’s a plot of the images for January 1, 2, and 4, plotted red, green, and blue respectively.

This shows that the images are not the same, but are apparently translations of each other.

There’s another difference between the images. Connecting consecutive partial sums draws an image clockwise on January 1, but counterclockwise on January 2 and 4. You can see this by clicking on the “animate” link on each page.
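If you'd like to experiment, here's a sketch of how such plots can be drawn. I'm assuming partial sums of exp(2πi(n/m + n²/d + n³/y)) with m the month, d the day, and y the two-digit year (25 below is an example year); see the exponential sum page for the exact definition:

    import numpy as np
    import matplotlib.pyplot as plt

    def partial_sums(m, d, y, N=1000):
        """Partial sums of exp(2πi(n/m + n²/d + n³/y)) for n = 1..N."""
        n = np.arange(1, N + 1)
        z = np.exp(2j * np.pi * (n / m + n**2 / d + n**3 / y))
        return np.cumsum(z)

    # January 1, 2, and 4 in red, green, and blue
    for m, d, y, color in [(1, 1, 25, "red"), (1, 2, 25, "green"), (1, 4, 25, "blue")]:
        s = partial_sums(m, d, y)
        plt.plot(s.real, s.imag, color=color)

    plt.axis("equal")
    plt.axis("off")
    plt.show()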


Square root factorial

What factorial is closest to the square root of 2024 factorial?

A good guess would be 1012, based on the idea that √(n!) might be near (n/2)!.

This isn’t correct—the actual answer is 1112—but it’s not wildly off.

Could it be that (2n)! is asymptotically (n!)²?

No, Gauss’ duplication formula

Γ(2z) = (1/√(2π)) 2^(2z − 1/2) Γ(z) Γ(z + ½)

shows that the ratio of (2n)! to (n!)² grows exponentially as a function of n.

However, the ratio only grows exponentially, and factorials grow faster than exponentially. I believe that the value of m minimizing

|√(n!) − m!|

asymptotically approaches n/2. In other words, the inverse factorial of √(n!) approaches n/2. And more generally the inverse factorial of (n!)^(1/k) asymptotically approaches n/k. I haven’t written out a proof, but the plot below shows numerical evidence.
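As a sanity check on the n = 2024 example, here's a sketch of the computation, done on a log scale via math.lgamma so the huge factorials are never actually formed; the function name is mine:

    import math

    def closest_factorial_to_root(n, k=2):
        """Find m minimizing |(n!)^(1/k) - m!|."""
        target = math.lgamma(n + 1) / k          # log of (n!)^(1/k)
        m = 0
        while math.lgamma(m + 2) <= target:      # advance while log((m+1)!) <= target
            m += 1
        # Now m! <= (n!)^(1/k) < (m+1)!. (m+1)! is closer exactly when
        # m! + (m+1)! < 2 (n!)^(1/k), i.e. log(m!) + log(m+2) < log 2 + target.
        if math.lgamma(m + 1) + math.log(m + 2) < math.log(2) + target:
            return m + 1
        return m

    print(closest_factorial_to_root(2024))   # 1112, matching the answer above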

So √(n!) is not asymptotically (n/2)!, but the inverse factorial of √(n!) is asymptotically n/2.

See the next post for a way to compute inverse factorials.

222nd Carnival of Mathematics

A blog carnival is a round up of recent blog posts. The Carnival of Mathematics is a long-running carnival of blog posts on mathematical topics. This is the 222nd edition of the carnival.

Facts about 222

By longstanding tradition, the nth Carnival of Mathematics must begin with trivia about the number n, and so here are a few facts about the number 222.

  • There are six groups of order 222, and 222 groups of order 912.
  • 222 = 9 + 87 + 6 × 5 × 4 + 3 + 2 + 1.
  • You can encode the number 222 in the Major mnemonic system as “unknown” or “one-on-one.”

The posts

Gil Kalai looks at what happens when you ask ChatGPT to solve Elchanan Mossel’s dice problem.

Γιώργος Πλούσος (@plousos2505 on the social media platform formerly known as Twitter) posted an image showing how to compute π via a sequence of approximations requiring only basic geometry.

David Eppstein’s blog 11011110 has a post on Pyramidology. It’s particularly fitting to have a post from 11011110 in this carnival. As the author explains on his About page,

This journal’s name comes from interpreting my initials as the hexadecimal number 0xDE (decimal 222) and then converting the result to binary.

The blog ThatsMaths has a post on Sharkovsky numbering, Oleksandr Sharkovsky (1936–2022), and Sharkovsky’s Theorem. This post includes the beautiful image from Wikimedia.

The post Patterns of reality by Timothy Williamson gives an overview of logic from basics to the work of Gödel and Turing.

Larissa Fedunik-Hofman posts an interview with Andrew Krause discussing dynamical systems.

Finally, we have Matthew Scroggs’ Advent calendar of math puzzles.

To see previous editions of the carnival, or to submit a post to the next edition, see the Carnival of Mathematics page on Aperiodical.

Wrapping Christmas lights around a tree trunk

Suppose you want to wrap Christmas lights around a tree trunk that we can approximate by a cylinder of radius r.

You want to wrap lights around the tree in a helix, going up a distance h every time you go around the tree once. What length of lights do you need to make n turns around the tree?

You can model the lights as a parametric curve with equation

(r cos t, r sin t, ht/2π)

If t ranges from 0 to T, the corresponding curve length is

√(r² + h²/4π²) T

and you can set T = 2πn if you want to find the length of n turns.

How much does h matter? Not much if h is less than r, as is often the case when wrapping a tree trunk with Christmas lights. My daughter Allison discovered this while wrapping lights around our pine tree, and then I wrote this post adding the math details.

If we expand √(r² + h²/4π²) as a function of h in a Taylor series centered at 0 we get

√(r² + h²/4π²) = r + h²/(8π²r) + O(h⁴).

For example, suppose a tree trunk has radius r = 10 inches and we move up h = 4 inches vertically with each turn. To complete one turn we let T = 2π. The exact length of one turn is

√(r² + h²/4π²) T = √(10² + 4²/4π²) (2π) = 62.96 inches.

If we ignore the h term we get

rT = 10 (2π) = 62.83 inches.

In short, the length of n turns around the tree is approximately 2πrn, the same as n circles around the tree. The difference in length between a helix with n turns and n circles is negligible, provided h is smaller than r. Even if h = r = 10 in the example above, we’d get a length of 63.6 inches and our approximation would still be off by less than an inch per turn.
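Here's the comparison in code, a small sketch:

    import math

    def helix_length(r, h, n):
        """Exact length of n turns of a helix with radius r, rising h per turn."""
        return math.sqrt(r**2 + h**2 / (4 * math.pi**2)) * 2 * math.pi * n

    def circles_length(r, n):
        """Approximation: n circles of radius r."""
        return 2 * math.pi * r * n

    print(helix_length(10, 4, 1))    # 62.96
    print(circles_length(10, 1))     # 62.83
    print(helix_length(10, 10, 1))   # 63.62, still under an inch off per turn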

The heart of this calculation pops up frequently in various contexts. For example, the same calculation appears in the post It doesn’t matter much if the tape is straight.

F# and G

I was looking at frequencies of pitches and saw something I hadn’t noticed before: F# and G have very nearly integer frequencies.

To back up a bit, we’re assuming the A above middle C has frequency 440 Hz. This is the most common convention now, but conventions have varied over time and place.

We’re assuming 12-tone equal temperament (12-TET), and so each semitone is a ratio of 2^(1/12). So the nth note in the chromatic scale from A below middle C to A above middle C has frequency

220 × 2^(n/12).

I expected the pitch with frequency closest to an integer would be an E, because a perfect fifth above 220 Hz would be exactly 330 Hz. In equal temperament the frequency of the E above middle C is 329.6 Hz.

The frequency of F# is

220 × 2^(9/12) Hz = 369.9944 Hz.

The difference between this frequency and 370 Hz is much less than the difference between equal temperament and other tuning systems.

The frequency of G is

220 × 2^(5/6) Hz = 391.9954 Hz

which is even closer to being an integer.

In more mathematical terms, stripped of musical significance, we’ve discovered that

2^(3/4) ≈ 37/22

and

2^(5/6) ≈ 98/55.
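A short script confirms this, listing how far each of the 13 notes is from an integer frequency (n = 9 is F# and n = 10 is G in the numbering above):

    for n in range(13):
        f = 220 * 2 ** (n / 12)
        print(f"n = {n:2d}: {f:8.4f} Hz, off by {abs(f - round(f)):.4f}")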