Crinkle Crankle Calculus

A crinkle crankle wall

A crinkle crankle wall, also called a serpentine wall, is a wavy wall that may seem to sacrifice some efficiency for aesthetics. The curves add visual interest, but surely they use more material than a straight wall. Except they don’t! They use more bricks than a straight wall of the same thickness, but they don’t have to be as thick.

Crinkle crankle walls resist horizontal forces, like wind, better than a straight wall would. So if the alternative to a crinkle crankle wall one brick thick is a straight wall two or more bricks thick, the former saves material. How much material?

The amount of material used in the wall is proportional to the product of its length and thickness. Suppose the wall is shaped like a sine wave and consider a section of wall 2π long. If the wall has the shape of a sin(x), then we need to find the arc length of this curve. This works out to the following integral.

\int_0^{2\pi} \sqrt{1 + a^2 \cos^2(x)}\, dx

The parameter a is the amplitude of the sine wave. If a = 0, we have a flat wave, i.e. a straight wall, and so the length of this segment is 2π = 6.2832. If a = 1, the integral is 7.6404. So a section of wall is 22% longer, but it uses 50% less material per unit length than a wall two bricks thick.

The integral above cannot be computed in closed form in terms of elementary functions, so this would make a good homework exercise for a class covering numerical integration.

The integral can be computed in terms of special functions. It equals 4 E(-a²) where E is the “complete elliptic integral of the second kind.” This function is implemented as EllipticE in Mathematica and as scipy.special.ellipe in Python.
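
As a quick numerical check, here is a short Python sketch that computes the arc length both ways for a = 1. It assumes SciPy is available and that ellipe accepts a negative parameter, as described above.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import ellipe

    a = 1.0

    # Direct numerical integration of the arc length integral
    arc, _ = quad(lambda x: np.sqrt(1 + a**2 * np.cos(x)**2), 0, 2*np.pi)

    print(arc)                  # 7.6404...
    print(4*ellipe(-a**2))      # same value via the complete elliptic integral
    print(arc/(2*np.pi) - 1)    # about 0.22, i.e. 22% longer than a straight wall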

As the amplitude a increases, the arc length of a section of wall increases. You could solve for the value of a that gives whatever arc length you like. For example, solving 4 E(-a²) = 4π numerically gives a ≈ 2.6. With that amplitude a section of wall is twice as long as a straight wall, so a crinkle crankle wall with this amplitude uses about as many bricks as a straight wall twice as thick.
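
Here is a sketch of how one might carry out that calculation; brentq is my choice of root finder and is not something from the derivation above.

    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import ellipe

    target = 2 * 2*np.pi   # twice the length of a straight 2*pi section

    # Find the amplitude whose arc length hits the target
    a = brentq(lambda a: 4*ellipe(-a**2) - target, 0.1, 10.0)
    print(a)               # roughly 2.6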

***

Photo “Crinkle crankle wall, Fulbourn” by Bob Jones is licensed under CC BY-SA 2.0

Inverse trig function implementations

Programming languages differ in the names they use for inverse trig functions, and in some cases differ in their return values for the same inputs.

Choice of range

If sin(θ) = x, then θ is the inverse sine of x, right? Maybe. Depends on how you define the inverse sine. If -1 ≤ x ≤ 1, then there are infinitely many θ’s whose sine is x. So you have to make a choice.

The conventional choice is for inverse sine to return a value of θ in the closed interval

[-π/2, π/2].

Inverse tangent uses the same choice. However, the end points aren’t possible outputs, so inverse tangent typically returns a value in the open interval

(-π/2, π/2).

For inverse cosine, the usual choice is to return values in the closed interval

[0, π].
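
As a small illustration, here is what those conventions look like in Python’s math module; other languages’ implementations behave the same way for real inputs.

    from math import asin, acos, atan

    print(asin(1))    # pi/2, the top of the closed interval [-pi/2, pi/2]
    print(acos(-1))   # pi, the top of the closed interval [0, pi]
    print(atan(1))    # pi/4, inside the open interval (-pi/2, pi/2)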

Naming conventions

Aside from how to define inverse trig functions, there’s the matter of what to call them. The inverse of tangent, for example, can be written tan⁻¹, arctan, or atan. You might even see arctg or atn or some other variation.

Most programming languages I’m aware of use atan etc. Python uses atan, but SciPy uses arctan. Mathematica uses ArcTan.

Software implementations typically follow the conventions above, and will return the same value for the same inputs, as long as inputs and outputs are real numbers.

Arctangent with two arguments

Many programming languages have a function atan2 that takes two arguments, y and x, and returns an angle whose tangent is y/x. Note that the y argument to atan2 comes first: top-down order, numerator then denominator, not alphabetical order.

However, unlike the one-argument atan function, the return value may not be in the interval (-π/2, π/2). Instead, atan2 returns an angle in the same quadrant as the point (x, y). For example,

    atan2(-1, 1)

returns -π/4; because the point (1, -1) is in the 4th quadrant, it returns an angle in the 4th quadrant. But

    atan2(1, -1)

returns 3π/4. Here the point (-1, 1) and the angle 3π/4 are in the 2nd quadrant.
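
Here are the same two examples in Python’s math module, which follows the y-first convention described above.

    from math import atan2

    print(atan2(-1, 1))   # -0.785398... = -pi/4, 4th quadrant
    print(atan2(1, -1))   #  2.356194... = 3*pi/4, 2nd quadrant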

In Common Lisp, there’s no function named atan2; you just call atan with two arguments. Mathematica is similar: you call ArcTan with two arguments. However, Mathematica uses the opposite convention and takes x as its first argument and y as its second.

Complex values

Some software libraries support complex numbers and some do not. And among those that do support complex numbers, the return values may differ. This is because, as above, you have choices to make when extending these functions to complex inputs and outputs.

For example, in Python,

    math.asin(2)

raises a domain error and

    scipy.arcsin(2)

returns 1.5707964 + 1.3169578i.

In Common Lisp,

    (asin 2)

returns 1.5707964 – 1.3169578i. Both SciPy and Common Lisp return complex numbers whose sine equals 2, but they follow different conventions for which number to choose. That is, both

    scipy.sin(1.5707964 - 1.3169578j)
    scipy.sin(1.5707964 + 1.3169578j)

in Python and

    (sin #C(1.5707964 -1.3169578))
    (sin #C(1.5707964 +1.3169578))

in Common Lisp both return 2, aside from rounding error.

In C99, calling the complex arc sine function casin from <complex.h>, as in

    double complex z = casin(2);

returns 1.5707964 + 1.3169578i. The Mathematica call

    ArcSin[2.]

returns 1.5707964 – 1.3169578i.

Branch cuts

The Common Lisp standardization committee did a very careful job of defining math functions for complex arguments. I’ve written before about this here. You don’t need to know anything about Lisp to read that post.

The committee ultimately decided to first rigorously define the two-argument arctangent function and bootstrap everything else from there. The post above explains how.

Other programming languages have made different choices and so may produce different values, as demonstrated above. I mention the Common Lisp convention because they did a great job of considering every detail, such as how to handle +0 and -0 in IEEE floating point.

arctan( k tan(x) )

I recently had to work with the function

f(x; k) = arctan( k tan(x) )

The semicolon between x and k is meant to imply that we’re thinking of x as the argument and k as a parameter.

This function is an example of the coming full circle theme I’ve written about several times. Here’s how a novice, a journeyman, and an expert might approach studying our function.

  • Novice: arctan( k tan(x) ) = kx.
  • Journeyman: You can’t do that!
  • Expert: arctan( k tan(x) ) ≈ kx for small x.

Novices often move symbols around without thinking about their meaning, and so someone might pull the k outside (why not?) and notice that arctan( tan(x) ) = x.

Someone with a budding mathematical conscience might conclude that since our function is nonlinear in x and in k, there’s not much that can be said without more work.

Someone with more experience might see that both tan(x) and arctan(x) have the form x + O(x³) and so

arctan( k tan(x) ) ≈ kx

should be a very good approximation for small x.

Here’s a plot of our function for varying values of k.

Plot of atan( k tan(x) ) for varying k

Each is fairly flat in the middle, with slope equal to its value of k.
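
A minimal matplotlib sketch reproduces a plot like this; the particular values of k and the plotting range below are my own choices, not necessarily those used in the original figure.

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-np.pi/2 + 0.01, np.pi/2 - 0.01, 500)
    for k in [1, 2, 5, 20]:
        plt.plot(x, np.arctan(k*np.tan(x)), label=f"k = {k}")
    plt.xlabel("x")
    plt.legend()
    plt.show()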

As k increases, f(x; k) becomes more and more like a step function, equal to -π/2 for negative x and π/2 for positive x, i.e.

arctan( k tan(x) ) ≈ sgn(x) π/2

for large k. Here again we might have a discussion like the one above.

  • Novice: Set k = ±∞. Then ±∞ tan(x) = ±∞ and arctan(±∞) = ±π/2.
  • Journeyman: You can’t do that!
  • Expert: Well, you can if you interpret everything in terms of limits.


The hardest logarithm to compute

Suppose you want to compute the natural logarithms of every floating point number, correctly truncated to a floating point result. Here by floating point number we mean an IEEE standard 64-bit float, what C calls a double. Which logarithm is hardest to compute?

We’ll get to the hardest logarithm shortly, but we’ll first start with a warm up problem. Suppose you needed to compute the first digit of a number z. You know that z equals 2 + ε where ε is either 10⁻¹⁰⁰ or -10⁻¹⁰⁰. If ε is positive, you should return 2. If ε is negative, you should return 1. You know z to very high precision a priori, but to return the first digit correctly, you need to compute z to 100 decimal places.

In this post we’re truncating floating point numbers, i.e. rounding down, but similar considerations apply when rounding to nearest. For example, suppose you wanted to round 0.5 + ε to the nearest integer.

In general, it may not be possible to know how accurately you need to compute a value in order to round it correctly. William Kahan called this the table maker’s dilemma.

Worst case for logarithm

Lefèvre and Muller [1] found that the floating point number whose logarithm is hardest to compute is

x = 1.0110001010101000100001100001001101100010100110110110 × 2⁶⁷⁸

where the fractional part is written in binary. The log of x, again written in binary, is

111010110.0100011110011110101110100111110010010111000100000000000000000000000000000000000000000000000000000000000000000111 …

The first 53 significant bits, all the bits a floating point number can represent [2], are followed by 65 zeros. We have to compute the log with a precision of more than 118 bits in order to know whether the last bit of the fraction part of the float representation should be a 0 or a 1. Usually you don’t need nearly that much precision, but this is the worst case.

Mathematica code

Here is Mathematica code to verify the result above. I don’t know how to give Mathematica floating point numbers in binary, but I know how to give it integers, so I’ll multiply the fraction part by 2⁵² and take 52 from the exponent.

n = FromDigits[
  "10110001010101000100001100001001101100010100110110110", 2]
x = n * 2^(678 - 52)
BaseForm[N[Log[x], 40], 2]

This computes the log of x to 40 significant decimal figures, between 132 and 133 bits, and gives the result above.
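
If you would rather stay in Python, here is a sketch of the same check using mpmath; the choice of mpmath (and of 200 bits of working precision) is mine, not the post’s.

from mpmath import mp, mpf, log, nstr

mp.prec = 200   # comfortably more than the ~118 bits the worst case requires
n = int("10110001010101000100001100001001101100010100110110110", 2)
x = mpf(n) * mpf(2)**(678 - 52)
print(nstr(log(x), 40))   # log(x) to 40 significant decimal figures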

Related posts

A few days ago I wrote about another example by Jean-Michel Muller: A floating point oddity.

William Kahan also came up recently in my post on summing an array of floating point numbers.

***

[1] Vincent Lefèvre, Jean-Michel Muller. Worst Cases for Correct Rounding of the Elementary Functions in Double Precision. November 2000. Research Report 2000-35. Available here.

[2] There are 52 bits in the fraction of an IEEE double. The first bit of the significand is 1, unless the number is subnormal, so the first bit is left implicit, squeezing out one extra bit of precision. That is, storing 52 bits gives us 53 bits of precision.

Summing an array of floating point numbers

Adding up an array of numbers is simple: just loop over the array, adding each element to an accumulator.

Usually that’s good enough, but sometimes you need more precision, either because your application needs a high-precision answer, or because your problem is poorly conditioned and you need a moderate-precision answer [1].

There are many algorithms for summing an array, which may be surprising if you haven’t thought about this before. This post will look at Kahan’s algorithm, one of the first non-obvious algorithms for summing a list of floating point numbers.

We will sum a list of numbers a couple of ways using C’s float type, a 32-bit floating point number. Some might object that this is artificial. Does anyone use floats any more? Didn’t moving from float to double get rid of all our precision problems? No and no.

In fact, the use of 32-bit float types is on the rise. Moving data in and out of memory is now the bottleneck, not arithmetic operations per se, and float lets you pack more data into a given volume of memory. Not only that, there is growing interest in floating point types even smaller than 32-bit floats.

And while sometimes you need less precision than 32 bits, sometimes you need more precision than 64 bits. See, for example, this post.

Now for some code. Here is the obvious way to sum an array in C:

    float naive(float x[], int N) {
        float s = 0.0;
        for (int i = 0; i < N; i++) {
            s += x[i];
        }
        return s;
    }

Kahan’s algorithm is not much more complicated, but it looks a little mysterious and provides more accuracy.

    float kahan(float x[], int N) {
        float s = x[0];   /* running sum */
        float c = 0.0;    /* running compensation for lost low-order bits */
        for (int i = 1; i < N; i++) {
            float y = x[i] - c;  /* corrected next term */
            float t = s + y;     /* add it to the sum; low-order bits of y may be lost */
            c = (t - s) - y;     /* (t - s) is what was actually added; c is the error */
            s = t;
        }
        return s;
    }

Now to compare the accuracy of these two methods, we’ll use a third method that uses a 64-bit accumulator. We will assume its value is accurate to at least 32 bits and use it as our exact result.

    double dsum(float x[], int N) {
        double s = 0.0;
        for (int i = 0; i < N; i++) {
            s += x[i];
        }
        return s;
    }

For our test problem, we’ll use the reciprocals of the first 100,000 positive integers rounded to 32 bits as our input. This problem was taken from [2].

    #include <stdio.h>
    int main() {
        int N = 100000;
        float x[N];
        for (int i = 1; i <= N; i++)
            x[i-1] = 1.0/i;
  
        printf("%.15g\n", naive(x, N));
        printf("%.15g\n", kahan(x, N));  
        printf("%.15g\n", dsum(x, N));
    }

This produces the following output.

    12.0908508300781
    12.0901460647583
    12.0901461953972

The naive method is correct to 6 significant figures while Kahan’s method is correct to 9 significant figures. The error in the former is 5394 times larger. In this example, Kahan’s method is as accurate as possible using 32-bit numbers.

Note that in our example we were not exactly trying to find the Nth harmonic number, the sum of the reciprocals of the first N positive numbers. Instead, we were using these integer reciprocals rounded to 32 bits as our input. Think of them as just data. The data happened to be produced by computing reciprocals and rounding the result to 32-bit floating point accuracy.

But what if we did want to compute the 100,000th harmonic number? There are methods for computing harmonic numbers directly, as described here.
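
For instance, one standard direct method is the asymptotic expansion ln n + γ + 1/(2n) – 1/(12n²) + … for the nth harmonic number. The sketch below illustrates that formula; it is not code from the post referenced above.

    from math import log

    gamma = 0.5772156649015329   # Euler-Mascheroni constant

    def harmonic(n):
        # Asymptotic expansion, accurate to near machine precision for large n
        return log(n) + gamma + 1/(2*n) - 1/(12*n**2)

    print(harmonic(100000))      # about 12.0901461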


[1] The condition number of a problem is basically a measure of how much floating point errors are multiplied in the process of computing a result. A well-conditioned problem has a small condition number, and an ill-conditioned problem has a large condition number. With ill-conditioned problems, you may have to use extra precision in your intermediate calculations to get a result that’s accurate to ordinary precision.

[2] Handbook of Floating-Point Arithmetic by Jean-Michel Muller et al.

Arbitrary precision is not a panacea

Having access to arbitrary precision arithmetic does not necessarily make numerical calculation difficulties go away. You still need to know what you’re doing.

Before you extend precision, you have to know how far to extend it. If you don’t extend it enough, your answers won’t be accurate enough. If you extend it too far, you waste resources. Maybe you’re OK wasting resources, so you extend precision more than necessary. But how far is more than necessary?

As an example, consider the problem of finding values of n such that tan(n) > n, discussed a couple of days ago. After writing that post, someone pointed out that these values of n form one of the sequences on OEIS. One of the values on the OEIS site is

k = 1428599129020608582548671.

Let’s verify that tan(k) > k. We’ll use our old friend bc because it supports arbitrary precision. Our k is a 25-digit number, so let’s tell bc we want to work with 30 decimal places to give ourselves a little room. (bc automatically gives you a little more room than you ask for, but we’ll ask for even more.)

bc doesn’t have a tangent function, but it does have s() for sine and c() for cosine, so we’ll compute tangent as sine over cosine.

    $ bc -l
    scale = 30
    k = 1428599129020608582548671
    s(k)/c(k) - k

We expect this to return a positive value, but instead it returns

-1428599362980017942210629.31…

So is OEIS wrong? Or is bc ignoring our scale value? It turns out neither is true.

You can’t directly compute the tangent of a large number. You use range reduction to reduce it to the problem of computing the tangent of a small angle where your algorithms will work. Since tangent has period π, we reduce k mod π by computing k – ⌊k/π⌋π. That is, we subtract off as many multiples of π as we can until we get a number between 0 and π. Going back to bc, we compute

    pi = 4*a(1)
    k/pi

This returns

    454737226160812402842656.500000507033221370444866152761

and so we compute the tangent of

    t = 0.500000507033221370444866152761*pi
      = 1.570797919686740002588270178941

Since t is slightly larger than π/2, the tangent will be negative. We can’t have tan(t) greater than k because tan(t) isn’t even greater than 0. So where did things break down?

The calculation of pi was accurate to 30 significant figures, and the calculation of k/pi was accurate to 30 significant figures, given our value of pi. So far bc has performed as promised.

The computed value of k/pi is correct to 29 significant figures, 23 before the decimal place and 6 after. So when we take the fractional part, we only have six significant figures, and that’s not enough. That’s where things go wrong. The computed fractional part of k/π is greater than 0.5 in the seventh decimal place, while the exact value is less than 0.5 in the 25th decimal place. We needed 25 – 6 = 19 more significant figures.

This is the core difficulty with floating point calculation: subtracting nearly equal numbers loses precision. If two numbers agree to m decimal places, you can expect to lose m significant figures in the subtraction. The error in the subtraction will be small relative to the inputs, but not small relative to the result.

Notice that we computed k/pi to 29 significant figures, and given that output we computed the fractional part exactly. We accurately computed ⌊k/π⌋π, but lost precision when we subtracted that value from k.

Our error in computing k – ⌊k/π⌋π was small relative to k, but not relative to the final result. Our k is on the order of 10²⁵, and the error in our subtraction is on the order of 10⁻⁷, but the result is on the order of 1. There’s no bug in bc. It carried out every calculation to the advertised accuracy, but it didn’t have enough working precision to produce the result we needed.
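
The same experiment is easy to repeat in Python with mpmath; that is my choice of tool here, not the post’s. The point is the same: what matters is the working precision, not the tool.

    from mpmath import mp, pi, floor, tan

    k = 1428599129020608582548671      # exact Python integer

    mp.dps = 30                        # roughly the precision used above
    r = k - floor(k/pi)*pi             # reduce k mod pi; the subtraction destroys most digits
    print(tan(r) > k)                  # False: the reduced angle is only good to a few digits

    mp.dps = 60                        # room for the ~19 extra digits we needed
    r = k - floor(k/pi)*pi
    print(tan(r) > k)                  # True: tan(k) really is greater than k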


Higher order Newton-like methods for root finding

There are variations on Newton’s root finding method that use higher derivatives and converge faster. Alston Householder developed a sequence of such methods of the form

H_n(x) = x + (n-1) \frac{\left( \frac{1}{f(x)}\right)^{(n-2)}}{\left( \frac{1}{f(x)}\right)^{(n-1)}}

where the superscripts in parentheses indicate derivatives. When n = 2, Householder’s method reduces to Newton’s method.

Once Newton’s method is close enough to a root, the error is squared at each iteration. Said another way, the number of correct decimal places roughly doubles. With the Householder method of order n, the number of correct decimal places roughly increases by a factor of n.

The Householder method Hn can be written as a rational function of f and its derivatives up to order n-1, though the formulas are complicated.

Maybe it’s time for Householder’s methods to be more widely used: with automatic differentiation [1], the complexity of computing higher order derivatives is not the drawback it was when these computations were done solely by hand.

Demonstration

We’ll illustrate the convergence of the Householder methods by finding roots of the quintic polynomial

f(x) = x⁵ + ax + b.

I used Mathematica to calculate expressions for the Householder methods and export the results as code. Mathematica does not have a function to export Python code, but it does have a function CForm to export expressions as C code. I started to use this, converting exponents from C to Python form with search and replace. Then I tried using FortranForm instead, and that was much easier because Fortran and Python both represent powers with a ** operator.

def f(x, a, b):
    return x**5 + a*x + b

def H2(x, a, b):
    return x - (b + a*x + x**5)/(a + 5*x**4)
    
def H3(x, a, b):
    num = (a + 5*x**4)*(b + a*x + x**5)
    den = a**2 - 10*b*x**3 + 15*x**8
    return x - num/den

def H4(x, a, b):
    num = (b + a*x + x**5)*(-a**2 + 10*b*x**3 - 15*x**8)
    den = a**3 + 10*b**2*x**2 + 5*a**2*x**4 - 80*b*x**7 - 25*a*x**8 + 35*x**12
    return x + num/den
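
Here is a rough sketch of how expressions like these could be generated symbolically. It uses SymPy rather than Mathematica (see the update below), so it illustrates the idea rather than reproducing the workflow described above.

from sympy import symbols, diff, simplify, pycode

x, a, b = symbols('x a b')
f = x**5 + a*x + b

def householder(n):
    # H_n(x) = x + (n-1) * (1/f)^(n-2) / (1/f)^(n-1)
    num = diff(1/f, x, n - 2)
    den = diff(1/f, x, n - 1)
    return simplify(x + (n - 1)*num/den)

for n in [2, 3, 4]:
    print(pycode(householder(n)))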

To set values of a and b, I arbitrarily chose a = 2 and solved for b so that π is a root. That way we have a known value of the root for comparison. I used x = 4 as my starting guess at the root.

I first used -log10|x – π| to measure the number of correct decimals. That didn’t quite work because the method would converge exactly (to the limits of machine precision) which led to taking the log of zero. So I modified my accuracy function to the following.

from math import pi, log10

def decimals(x):
    d = abs(x - pi)
    if d != 0:
        return round(-log10(d), 2)
    else:
        return "inf"

Then I tested each of the Householder methods with the following.

a = 2
b = -(pi**5 + a*pi)   # chosen so that pi is a root of f(x) = x**5 + a*x + b

epsilon = 1e-14
for H in [H2, H3, H4]:
    x = 4
    for i in range(1, 10):
        if abs(f(x,a,b)) > epsilon:
            x = H(x,a,b)
            print(i, x, decimals(x))

Here are the results in tabular form. I replaced all the decimals after the first incorrect decimal with underscores to make it easier to see the convergence.

|---+-------------------+--------|
| i |                 x | digits |
|---+-------------------+--------|
| 1 | 3.4______________ |   0.53 |
| 2 | 3.18_____________ |   1.33 |
| 3 | 3.142____________ |   2.87 |
| 4 | 3.141593_________ |   5.93 |
| 5 | 3.141592653590___ |  12.07 |
| 6 | 3.141592653589793 |    inf |
|---+-------------------+--------|
| 1 | 3.2______________ |   1.11 |
| 2 | 3.1416___________ |   4.03 |
| 3 | 3.1415926535899__ |  12.79 |
| 4 | 3.141592653589793 |    inf |
|---+-------------------+--------|
| 1 | 3.15_____________ |   1.84 |
| 2 | 3.141592654______ |   8.85 |
| 3 | 3.141592653589793 |    inf |
|---+-------------------+--------|

Note that, as promised, the number of correct decimal places for Hn increases by roughly a factor of n with each iteration.

Update: See this post where the symbolic calculations are done in SymPy rather than Mathematica. Everything stays in Python, and there’s no need to edit the code for the Householder methods.


[1] Note that in this post I used Mathematica to do symbolic differentiation, not automatic differentiation. Automatic differentiation differs from both symbolic and numerical differentiation. See an explanation here.

Progress on the Collatz conjecture

The Collatz conjecture is for computer science what until recently Fermat’s last theorem was for mathematics: a famous unsolved problem that is very simple to state.

The Collatz conjecture, also known as the 3n+1 problem, asks whether the following function terminates for all positive integer arguments n.

    def collatz(n):
        if n == 1:
            return 1
        elif n % 2 == 0:
            return collatz(n // 2)   # integer division, so n stays an integer
        else:
            return collatz(3*n + 1)

In words, this says to start with a positive integer. Repeatedly either divide it by 2 if it’s even, or multiply it by 3 and add 1 if it’s odd. Will this sequence always reach 1?

The Collatz conjecture is a great example of how hard it can be to thoroughly understand even a few lines of code.

Terence Tao announced today that he has new partial results toward proving the Collatz conjecture. His blog post and arXiv paper are both entitled “Almost all Collatz orbits attain almost bounded values.”

When someone like Tao uses the word “almost,” it is a term of art, a common word used as a technical term. He is using “almost” as it is used as a technical term in number theory, which is different from the way the word is used technically in measure theory.

I get email routinely from people who believe they have a proof of the Collatz conjecture. These emails are inevitably from amateurs. The proofs are always short, elementary, and self-contained.

The contrasts with Tao’s result are stark. Tao has won the Fields Medal, arguably the highest prize in mathematics [1], and a couple dozen other awards. Amateurs can and do solve open problems, but it’s uncommon.

Tao’s proof is 48 pages of dense, advanced mathematics, building on the work of other researchers. Even so, he doesn’t claim to have a complete proof, but partial results. That is how big conjectures typically fall: by numerous people chipping away at them, building on each other’s work.


[1] Some say the Abel prize is more prestigious because it’s more of a lifetime achievement award. Surely Tao will win that one too when he’s older.

Asimov’s question about π

In 1977, Isaac Asimov [1] asked how many terms of the slowly converging series

π = 4 – 4/3 + 4/5 – 4/7 + 4/9 – …

would you have to sum before doing better than the approximation

π ≈ 355/113.

A couple years later Richard Johnsonbaugh [2] answered Asimov’s question in the course of an article on techniques for computing the sum of series. Johnsonbaugh said you would need at least N = 3,748,630 terms.

Johnsonbaugh’s answer is based on exact calculations. I wondered how well you’d do with N terms using ordinary floating point arithmetic. Would there be so much rounding error that the result is terrible?

I wrote the most direct implementation in Python, with no tricks to improve the accuracy.

    from math import pi
    s = 0
    N = 3748630
    for n in range(1, N+1):
        s += (-1)**(n+1) * 4/(2*n - 1)

I intended to follow this up by showing that you could do better by summing all the positive and negative terms separately, then doing one subtraction at the end. But the naive version actually does quite well. It’s essentially as accurate as 355/113, with both approximations having an error of 2.66764 × 10⁻⁷.
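
For what it’s worth, here is a sketch of that variation (my own illustration, not code from the article): sum the positive and negative terms separately and subtract once at the end.

    N = 3748630
    pos = sum(4/(2*n - 1) for n in range(1, N + 1, 2))   # n = 1, 3, 5, ...
    neg = sum(4/(2*n - 1) for n in range(2, N + 1, 2))   # n = 2, 4, 6, ...
    s = pos - neg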

Extended precision with bc

Next, I translated my program to bc [3] so I could control the precision. bc lets you specify your desired precision with its scale parameter.

    scale = 20
    pi = 4*a(1)
    s = 0
    m = 3748630
    for (n = 1; n <= m; n++) {
        s += (-1)^(n+1) * 4/(2*n - 1)
    }

Floating point precision is between 15 and 16 decimal places. I added more precision by setting scale to 20, i.e. carrying out calculations to 20 decimal places, and summed the series again.

The absolute error in the series was less than the error in 355/113 in the 14th decimal place. When I used one less term in the series, its error was larger than the error in 355/113 in the 14th decimal place. In other words, the calculations suggest Johnsonbaugh found exactly the minimum number of terms needed.

I doubt Johnsonbaugh ever verified his result computationally. He doesn’t mention computer calculations in his paper [4], and it would have been difficult in 1979 to have access to the necessary hardware and software.

If he had access to an Apple II at the time, it would have run at 1 MHz. My calculation took around a minute to run on a 2 GHz laptop, so I’m guessing the same calculation would have taken a day or two on an Apple II. This assumes he could find extended precision software like bc on an Apple II, which is doubtful.

The bc programming language had been written four years earlier, so someone could have run a program like the one above on a Unix machine somewhere. However, such machines were hard to find, and their owners would have been reluctant to give up a couple days of compute time for a guest to run a frivolous calculation.


[1] Isaac Asimov, Asimov on Numbers, 1977.

[2] Richard Johnsonbaugh, Summing an Alternating Series. The American Mathematical Monthly, Vol 86, No 8, pp.637–648.

[3] Notice that N from the Python program became m in bc. I’ve used bc occasionally for years, and didn’t know until now that you cannot use capital letters for variables in standard bc. I guess I never tried before. The next post explains why bc doesn’t allow capital letters in variable names.

[4] Johnsonbaugh’s paper does include some numerical calculations, but he only sums up 500 terms, not millions of terms, and it appears he only uses ordinary precision, not extended precision.

Lebesgue’s proof of Weierstrass’ theorem

A couple weeks ago I wrote about the Weierstrass approximation theorem, the theorem that says every continuous function on a closed finite interval can be approximated as closely as you like by a polynomial.

The post mentioned above uses a proof by Bernstein. And in that post I used the absolute value function as an example. Not only is |x| an example, you could go the other way around and use it as a step in the proof. That is, there is a proof of the Weierstrass approximation theorem that starts by proving the special case of |x|, then uses that result to build a proof for the general case.

There have been many proofs of Weierstrass’ theorem, and recently I ran across a proof due to Lebesgue. Here I’ll show how Lebesgue constructed a sequence of polynomials approximating |x|. It’s like pulling a rabbit out of a hat.

The starting point is the binomial theorem. If x and y are real numbers with |x| > |y| and r is any real number, then

(x+y)^r = \sum_{k=0}^\infty {r \choose k} x^{r-k} y^k.

Now apply the theorem substituting 1 for x and x² – 1 for y above and you get

|x| = (1 + (x^2 - 1))^{1/2} =\sum_{k=0}^\infty {1/2 \choose k} (x^2-1)^k

The partial sums of the right hand side are a sequence of polynomials converging to |x| on the interval [-1, 1].
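
As a quick sanity check, here is a short sketch that evaluates the partial sums numerically; scipy.special.binom, which accepts non-integer arguments, is my choice of tool here.

    from scipy.special import binom

    def p(x, n):
        # Partial sum of the series above through the (x^2 - 1)^n term
        return sum(binom(0.5, k) * (x**2 - 1)**k for k in range(n + 1))

    for n in [1, 5, 20, 100]:
        print(n, p(0.3, n))   # approaches |0.3| = 0.3 as n grows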

***

If you’re puzzled by the binomial coefficient with a top number that isn’t a positive integer, see the general definition of binomial coefficients. The top number can even be complex, and indeed the binomial theorem holds for complex r.

You might also be puzzled by the binomial theorem being an infinite sum. Surely if r is a positive integer we should get the more familiar binomial theorem which is a finite sum. And indeed we do. The general definition of binomial coefficients insures that if r is a positive integer, all the binomial coefficients with k > r are zero.