Eliminating polynomial terms https://www.johndcook.com/blog/2020/07/14/eliminating-polynomial-terms/ https://www.johndcook.com/blog/2020/07/14/eliminating-polynomial-terms/#respond Tue, 14 Jul 2020 12:48:55 +0000 https://www.johndcook.com/blog/?p=57587 The first step in solving a cubic equation is to apply a change of variables to reduce an equation of the form

x³ + bx² + cx + d = 0

to one of the form

y³ + py + q = 0.
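
For the cubic, this change of variables is just the shift x = y − b/3, which kills the quadratic term. Here’s a minimal SymPy check (the script and symbol names are mine, matching the equation above):

    from sympy import symbols, expand

    x, y, b, c, d = symbols('x y b c d')

    cubic = x**3 + b*x**2 + c*x + d
    depressed = expand(cubic.subs(x, y - b/3))

    # The coefficient of y**2 vanishes after the shift
    print(depressed.coeff(y, 2))   # 0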

This process can be carried further through Tschirnhausen transformations, a generalization of an idea going back to Ehrenfried Walther von Tschirnhaus in 1683.

For a polynomial of degree n > 4, a Tschirnhausen transformation is a rational change of variables

y = g(x) / h(x)

turning the equation

x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + \cdots + a_0 = 0

into

y^n + b_{n-4} y^{n-4} + b_{n-5} y^{n-5} + \cdots + b_0 = 0

where the denominator h(x) of the transformation is not zero at any root of the original equation.

I believe the details of how to construct the transformations are in An essay on the resolution of equations by G. B. Jerrard.

Related posts

Leapfrog integrator https://www.johndcook.com/blog/2020/07/13/leapfrog-integrator/ https://www.johndcook.com/blog/2020/07/13/leapfrog-integrator/#comments Mon, 13 Jul 2020 21:51:31 +0000 https://www.johndcook.com/blog/?p=57589 The so-called “leapfrog” integrator is a numerical method for solving differential equations of the form

x'' = f(x)

where x is a function of t. Typically x is position and t is time.

This form of equation is common for differential equations coming from mechanical systems. The form is more general than it may seem at first. It does not allow terms involving first-order derivatives, but these terms can often be eliminated via a change of variables. See this post for a way to eliminate first order terms from a linear ODE.

The leapfrog integrator is also known as the Störmer-Verlet method, or the Newton-Störmer-Verlet method, or the Newton-Störmer-Verlet-leapfrog method, or …

The leapfrog integrator has some advantages over, say, Runge-Kutta methods, because it is specialized for a particular (but important) class of equations. For one thing, it solves the second order ODE directly. Typically ODE solvers work on (systems of) first order equations: to solve a second order equation you turn it into a system of first order equations by introducing the first derivative of the solution as a new variable.

For another thing, it is reversible: if you advance the solution of an ODE from its initial condition to some future point, make that point your new initial condition, and reverse time, you can step back to where you started, aside from any loss of accuracy due to floating point arithmetic; in exact arithmetic you’d return to exactly where you started.

Another advantage of the leapfrog integrator is that it approximately conserves energy: the energy error stays bounded rather than drifting over time. Because of this, the leapfrog integrator can perform better over long time spans than a method that is more accurate per step but lets the energy drift.

Here is the leapfrog method in a nutshell with step size h.

\begin{align*} x_{i+1} &= x_i + v_i h + \frac{1}{2} f(x_i) h^2 \\ v_{i+1} &= v_i + \frac{1}{2}\left(f(x_i) + f(x_{i+1})\right) h \end{align*}

And here’s a simple Python demo.

    import numpy as np
    import matplotlib.pyplot as plt
    
    # Solve x" = f(x) using leapfrog integrator
    
    # For this demo, x'' + x = 0
    # Exact solution is x(t) = sin(t)
    def f(x):
        return -x
    
    k = 5               # number of periods
    N = 16              # number of time steps per period
    h = 2*np.pi/N       # step size
    
    x = np.empty(k*N+1) # positions
    v = np.empty(k*N+1) # velocities
    
    # Initial conditions
    x[0] = 0
    v[0] = 1
    anew = f(x[0])
    
    # leapfrog method
    for i in range(1, k*N+1):
        aold = anew
        x[i] = x[i-1] + v[i-1]*h + 0.5*aold*h**2
        anew = f(x[i])
        v[i] = v[i-1] + 0.5*(aold + anew)*h

Here’s a plot of the solution over five periods.
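
The plot isn’t reproduced here, but if you’d like to recreate something like it you could append a few lines of matplotlib to the demo above (the plotting choices are mine):

    t = np.linspace(0, 2*np.pi*k, k*N+1)   # time points matching the arrays above
    plt.plot(t, x, label="leapfrog")
    plt.plot(t, np.sin(t), "--", label="exact")
    plt.xlabel("t")
    plt.legend()
    plt.show()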

There’s a lot more I hope to say about the leapfrog integrator and related methods in future posts.

More on ODE solvers

Counterexample to Dirichlet principle https://www.johndcook.com/blog/2020/07/11/dirichlet-principle-counterexample/ https://www.johndcook.com/blog/2020/07/11/dirichlet-principle-counterexample/#respond Sat, 11 Jul 2020 19:53:24 +0000 https://www.johndcook.com/blog/?p=57507 Let Ω be an open set in some Euclidean space and v a real-valued function on Ω.

Dirichlet principle

Dirichlet’s integral for v, also called the Dirichlet energy of v, is

\int_\Omega \frac{1}{2} | \nabla v |^2

Among functions with specified values on the boundary of Ω, Dirichlet’s principle says that minimizing Dirichlet’s integral is equivalent to solving Laplace’s equation.

In a little more detail, let g be a continuous function on the boundary ∂Ω of the region Ω. A function u has minimum Dirichlet energy, subject to the requirement that u = g on ∂Ω, if and only if u solves Laplace’s equation

\Delta u = 0

subject to the same boundary condition.

Dirichlet’s principle requires some hypotheses not stated here, as Hadamard’s example below shows.

Hadamard’s example

Let g(θ) be the function [1]

g(\theta) = \sum_{n=1}^\infty \frac{\sin n!\theta}{n^2}

The function g is continuous and so there exists a unique solution to Laplace’s equation on the unit disk with boundary values given by g, but the Dirichlet energy of the solution diverges.

The solution, in polar coordinates, is

u(r, \theta) = \sum_{n=1}^\infty r^{n!} \,\,\frac{\sin n!\theta}{n^2}

The Laplace operator in polar coordinates is

\frac{1}{r} \frac{\partial }{\partial r}\left(r \frac{\partial u}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 u}{\partial \theta^2}

and you can differentiate u term by term to show that it satisfies Laplace’s equation.

Dirichlet’s integral in polar coordinates is

\int_0^{2\pi} \int_0^1 \frac{1}{2} \left\{ \left( \frac{\partial u}{\partial r}\right)^2 + \frac{1}{r^2}\left(\frac{\partial u}{\partial \theta}\right)^2 \right\} \, r\,dr\,d\theta

Integrating term-by-term, the nth term in the series for the Dirichlet energy in Hadamard’s example is

\frac{(n!)^2}{2n^4(2n! - 1)}

and so the series rapidly diverges.
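
As a quick sanity check, here is a short script (mine, not part of the post) that tabulates the first several terms using the formula above; after an initial dip they grow roughly like n!/(4n⁴):

    from math import factorial

    # nth term of the Dirichlet energy series for Hadamard's example
    def term(n):
        f = factorial(n)
        return f**2 / (2 * n**4 * (2*f - 1))

    for n in range(1, 11):
        print(n, term(n))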

Dirichlet’s principle requires that there be at least one function satisfying the specified boundary conditions that has finite Dirichlet energy. In the example above, the solution to Laplace’s equation with boundary condition g has infinite Dirichlet energy. It turns out the same is true for every function satisfying the same boundary condition, whether it satisfies Laplace’s equation or not.

Related posts

[1] What is the motivation for this function? The function is given by a lacunary series, a Fourier series with increasingly large gaps between the frequency components. The corresponding series for u cannot be extended to an analytic function outside the closed unit circle. If it could be so extended, Dirichlet’s principle would apply and the example wouldn’t work.

Software analysis and synthesis https://www.johndcook.com/blog/2020/07/09/software-analysis-and-synthesis/ https://www.johndcook.com/blog/2020/07/09/software-analysis-and-synthesis/#comments Thu, 09 Jul 2020 17:21:45 +0000 https://www.johndcook.com/blog/?p=57398 People who haven’t written large programs think that writing software is easy. All you have to do is break a big problem into smaller problems until you have something so small that it’s easy to program.

The problem is putting the pieces back together. If you’ve only written small programs, you haven’t had many pieces to put together. It’s harder to put the pieces together when you write a large program by yourself. It’s even harder when you work on a large program with other people.

Synthesis is harder than analysis. Or as Perdita Stevens put it, integration is harder than separation.

The image above is a screenshot from her keynote at the RC2020 conference on reversible computation.

Related post: The cost of taking things apart and putting them back together.

COVID19 mortality per capita by state https://www.johndcook.com/blog/2020/07/08/covid19-mortality-per-capita-by-state/ Wed, 08 Jul 2020 16:17:14 +0000 https://www.johndcook.com/blog/?p=57337 Here’s a silly graph by Richard West with a serious point. States with longer names tend to have higher covid19 mortality. Of course no one believes there’s anything about the length of a state’s name that should impact the health of its residents. The correlation is real, but it’s a coincidence.

The variation between mortality in different states is really large. Something caused that, though not the length of the names. But here’s the kicker: you may come up with an explanation that’s much more plausible than length of name, and be just as wrong. Discovering causation is hard work, much harder than looking for correlations.

Morse code golf https://www.johndcook.com/blog/2020/07/07/morse-code-golf/ https://www.johndcook.com/blog/2020/07/07/morse-code-golf/#comments Tue, 07 Jul 2020 14:11:00 +0000 https://www.johndcook.com/blog/?p=57276 You can read the title of this post as ((Morse code) golf) or as (Morse (code golf)).

Morse code is a sort of approximate Huffman coding of letters: letters are assigned symbols so that more common letters can be transmitted more quickly. You can read about how well Morse code achieves this design objective here.

But digits in Morse code are kinda strange. I imagine they were an afterthought, tacked on after encodings had been assigned to each of the letters, and so had to avoid encodings that were already in use. Here are the assignments:

    |-------+-------|
    | Digit | Code  |
    |-------+-------|
    |     1 | .---- |
    |     2 | ..--- |
    |     3 | ...-- |
    |     4 | ....- |
    |     5 | ..... |
    |     6 | -.... |
    |     7 | --... |
    |     8 | ---.. |
    |     9 | ----. |
    |     0 | ----- |
    |-------+-------|

There’s no attempt to relate transmission length to frequency. Maybe the idea was that all digits are equally common. While in some contexts this is true, it’s not true in general for mathematical and psychological reasons.

There is a sort of mathematical pattern to the Morse code symbols for digits. For 1 ≤ n ≤ 5, the symbol for n is n dots followed by 5-n dashes. For 6 ≤ n ≤ 9, the symbol is n-5 dashes followed by 10-n dots. The same rule extends to 0 if you think of 0 as 10.
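
Not as an entry in the contest, but just to make the rule above concrete, here is a straightforward, deliberately un-golfed Python sketch; the function names are mine.

    def encode(d):
        "Morse code for a digit, following the dot/dash rule above."
        n = 10 if d == 0 else d
        return "."*n + "-"*(5 - n) if n <= 5 else "-"*(n - 5) + "."*(10 - n)

    def decode(s):
        "Digit corresponding to a five-symbol Morse code string."
        if s[0] == ".":
            n = len(s) - len(s.lstrip("."))
        else:
            n = 5 + len(s) - len(s.lstrip("-"))
        return n % 10

    assert all(decode(encode(d)) == d for d in range(10))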

A more mathematically satisfying way to assign symbols would have been binary numbers padded to five places:

    0 -> .....
    1 -> ....-
    2 -> ...-.
    etc.

Because the Morse encoding of digits is awkward, it’s not easy to describe succinctly. And here is where golf comes in.

The idea of code golf is to write the shortest program that does some task. Fewer characters is better, just as in golf the lowest score wins.

Here’s the challenge: Write two functions, as small as you can, one to encode digits as Morse code and another to decode Morse digits. Share your solutions in the comments below.

Related posts

Squircle corner radius https://www.johndcook.com/blog/2020/07/05/squircle-corner-radius/ https://www.johndcook.com/blog/2020/07/05/squircle-corner-radius/#respond Mon, 06 Jul 2020 02:05:12 +0000 https://www.johndcook.com/blog/?p=57208 I’ve written several times about the “squircle,” a sort of compromise between a square and a circle. It looks something like a square with rounded corners, but it’s not. Instead of having flat sides (zero curvature) and circular corners (constant positive curvature), the curvature varies continuously.

A natural question is just what kind of circle approximates the corners. This post answers that question, finding the radius of curvature of the osculating circle.

The squircle has a parameter p which determines how close the curve is to a circle or a square.

|x|^p + |y|^p = 1

The case p = 2 corresponds to a circle, and in the limit as p goes to infinity you get a square.

We’ll work in the first quadrant so we can ignore absolute values. The curvature at each point is complicated [1] but simplifies in the corner to

2^{\frac{1}{p} - \frac{1}{2}} (p-1)

and the radius of curvature is the reciprocal of this. So for moderately large p, the radius of curvature is approximately √2/(p-1).
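
Here’s a small script (mine) evaluating that radius for a few values of p. Note that p = 2 gives radius 1, as it should for the unit circle.

    # Radius of the osculating circle at the corner of |x|^p + |y|^p = 1
    def corner_radius(p):
        return 1 / (2**(1/p - 1/2) * (p - 1))

    for p in [2, 2.5, 3.5, 10]:
        print(p, corner_radius(p))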

In the image at the top of the post, p = 3.5. Here’s an image with a larger value of p, p = 10.

And here’s one with a smaller value, p = 2.5.

When p = 2 we get a circle. When p is between 1 and 2 we get more of a diamond than a square. Notice in the image below with p = 1.5 the osculating circle is larger than the squircle, and the “corner” is nearly the whole side.

Finally, for p between 0 and 1 the sides of the diamond cave in giving a concave shape. Now the osculating circle is on the outside.

Related posts

[1] The general expression is

\frac{(p-1) (x y)^{p+1} \left(x^p+y^p\right)}{\left(y^2 x^{2 p}+x^2 y^{2 p}\right)^{3/2}}

Triple words https://www.johndcook.com/blog/2020/07/02/triple-words/ https://www.johndcook.com/blog/2020/07/02/triple-words/#respond Thu, 02 Jul 2020 14:09:32 +0000 https://www.johndcook.com/blog/?p=57064 A couple days ago I wrote a post about some doubled words I found on my site. Someone asked about triple words, so I looked. Here are some of the things I found.

One example was a post where I commented on a song from Fiddler on the Roof where Tevye sings

If I were a rich man,
Yubba dibby dibby dibby dibby dibby dibby dum.

Another example is a post on cryptography that describes a very large number as “over 400 million million million million.”

As I mentioned in the post on double words, logarithms of logarithms come up often in number theory. For example, the time it would take Shor’s algorithm to factor an n-digit number is

O( log(n)² log(log(n)) log(log(log(n))) )

I mentioned that in a post and added in a footnote

Obligatory old joke: What sound does a number theorist make when drowning? log log log …

Finally, I wrote a post called Gamma gamma gamma!, an allusion to the WWII film Tora! Tora! Tora!. The post explains how the gamma function, the gamma constant, and the gamma distribution are all related.

Kissing circle https://www.johndcook.com/blog/2020/07/02/kissing-circle/ https://www.johndcook.com/blog/2020/07/02/kissing-circle/#respond Thu, 02 Jul 2020 13:36:04 +0000 https://www.johndcook.com/blog/?p=57060 Curvature is a measure of how tightly a curve bends. A circle of radius r has curvature 1/r. So a small circle has high curvature and a big circle has small curvature.

In general the curvature of a curve at a point is defined to be the curvature of the circle that best fits at that point. This circle is called the osculating circle which means the circle that “kisses” the curve at that point.

From Online Etymology Dictionary:

osculate (v.) “to kiss (one another),” 1650s, from Latin osculatus, past participle of osculari “to kiss,” from osculum “a kiss; pretty mouth, sweet mouth,” literally “little mouth,” diminutive of os “mouth”

The center of the osculating circle is called the center of curvature and the radius of the circle is called the radius of curvature.

I’ll give two examples. The first is a set of ellipses that all have the unit circle as their osculating circle on the left end. So they all have curvature 1.

An ellipse with equation

\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1

has varying curvature at each point. At x = –a the curvature simplifies to a/b². So to make the graph above, I used a range of values for a and set each corresponding value of b to √a.
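
If you’d like to reproduce a plot along those lines, here is one way to do it (my code, not from the post): shift each ellipse so its left vertex sits at (-1, 0), and then the osculating circle there is the unit circle centered at the origin.

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 2*np.pi, 500)
    for a in [1.5, 2, 3, 4, 5]:
        b = np.sqrt(a)                  # makes the curvature at the left end a/b**2 = 1
        plt.plot(a*np.cos(t) + (a - 1), b*np.sin(t))   # ellipse centered at (a-1, 0)

    plt.plot(np.cos(t), np.sin(t), "k--")              # the shared osculating circle
    plt.gca().set_aspect("equal")
    plt.show()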

Next, instead of fixing the osculating circle I’ll fix the curve. I’ll use the equation to fit an egg that I’ve used before and plot the osculating circles at each end. The plot below uses a = 3, b = 2, and k = 0.1.

The curvature at the fat end is a(1-ka)/b² and the curvature at the pointy end is a(1+ka)/b². These are derived here.

Setting k = 0 gives the curvature of an ellipse at each end used above.

More on curvature

Double words https://www.johndcook.com/blog/2020/06/30/double-words/ https://www.johndcook.com/blog/2020/06/30/double-words/#comments Tue, 30 Jun 2020 16:56:14 +0000 https://www.johndcook.com/blog/?p=56948 Double words such as “the the” are a common source of writing errors. On the other hand, some doubled words are legitimate. You might, for example, find “had had” or “that that” in a grammatically correct sentence.

I’ve been looking through my web site to purge erroneous double words, and found a few doubles that are correct in context but would probably be incorrect elsewhere.

In ordinary English prose, long long is probably not what the author intended. There should either be a comma between the two words or a different choice of words. But in C code snippets, you’ll see long long as a type of integer. Also, it is common in many programming languages for a type and a variable to have the same name with varying capitalization, such as FILE file in C.

There are several pages on my site that refer to the Blum Blum Shub cryptographic random number generator. (The name of this algorithm always makes me think of a line from Night at the Museum.)

There are several pages on this site that use log log, always in the context of number theory. Logarithms of logarithms come up frequently in that context.

I also refer to unknown unknowns. The press ridiculed Donald Rumsfeld mercilessly when he first used this expression, but now the phrase is commonly used because more people understand that it names an important concept. It comes up frequently in statistics because so much attention is focused on known unknowns, even though unknown unknowns are very often the weakest link.

***

By the way, if you’d like to make a list of doubled words in a file, you could run the following shell one-liner:

   egrep -i -o '\<([a-z]+) \1\>' myfile | sort | uniq > doubles

I used something like this on a backup of my site to search for doubled words.

Approximating rapidly divergent integrals https://www.johndcook.com/blog/2020/06/29/approximating-rapidly-divergent-integrals/ https://www.johndcook.com/blog/2020/06/29/approximating-rapidly-divergent-integrals/#comments Mon, 29 Jun 2020 19:25:09 +0000 https://www.johndcook.com/blog/?p=56817 A while back I ran across a paper [1] giving a trick for evaluating integrals of the form

I(M) = \int_a^M \exp(f(x)) \, dx

where M is large and f is an increasing function. For large M, the integral is asymptotically

A(M) = \frac{\exp(f(M))}{f'(M)}.

That is, the ratio of A(M) to I(M) goes to 1 as M goes to infinity.

This looks like a strange variation on Laplace’s approximation. And although Laplace’s method is often useful in practice, no applications of the approximation above come to mind. Any ideas? I have a vague feeling I could have used something like this before.

There is one more requirement on the function f. In addition to being an increasing function, it must also satisfy

\lim_{x\to\infty} \frac{f''(x)}{(f'(x))^2} = 0.

In [1] the author gives several examples, including using f(x) = x². If we wanted to approximate

\int_0^{100} \exp(x^2)\, dx

the method above gives

exp(10000)/200 = 4.4034 × 10^4340

whereas the correct value to five significant figures is 4.4036 × 10^4340.

Even getting an estimate of the order of magnitude for such a large integral could be useful, and the approximation does better than that.
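
You can verify the example with mpmath, using the identity that the integral of exp(x²) from 0 to M equals √π erfi(M)/2. (This check is mine, not from the paper.)

    from mpmath import mp, erfi, exp, sqrt, pi

    mp.dps = 30
    M = 100
    exact  = sqrt(pi)/2 * erfi(M)    # exact value of the integral from 0 to M
    approx = exp(M**2) / (2*M)       # approximation exp(f(M))/f'(M) with f(x) = x**2
    print(approx)                    # about 4.4034e+4340
    print(approx / exact)            # about 0.99995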

[1] Ira Rosenholtz. Estimating Large Integrals: The Bigger They Are, The Harder They Fall. The College Mathematics Journal, Vol. 32, No. 5 (Nov., 2001), pp. 322-329

Best approximation of a catenary by a parabola https://www.johndcook.com/blog/2020/06/29/parabola-catenary/ https://www.johndcook.com/blog/2020/06/29/parabola-catenary/#comments Mon, 29 Jun 2020 15:20:43 +0000 https://www.johndcook.com/blog/?p=56807 A parabola and a catenary can look very similar but are not the same. The graph of

y = x²

is a parabola and the graph of

y = cosh(x) = (e^x + e^-x)/2

is a catenary. You’ve probably seen parabolas in a math class; you’ve seen a catenary if you’ve seen the St. Louis arch.

Depending on the range and scale, parabolas and catenaries can be too similar to distinguish visually, though over a wide enough range the exponential growth of the catenary becomes apparent.

For example, for x between -1 and 1, it’s possible to scale a parabola to match a catenary so well that the graphs practically overlap. The blue curve is a catenary and the orange curve is a parabola.

The graph above looks orange because the latter essentially overwrites the former. The relative error in approximating the catenary by the parabola is about 0.6%.

But when x ranges over -10 to 10, the best parabola fit is not good at all. The catenary is much flatter in the middle and much steeper on the sides. On this wider scale the hyperbolic cosine function is essentially e^|x|.

Here’s an intermediate case, -3 < x < 3, where the parabola fits the catenary pretty well, though one can easily see that the curves are not the same.

Now for some details. How are we defining “best” when we say best fit, and how do we calculate the parameters for this fit?

I’m using a least-squares fit, minimizing the L² norm of the error, over the interval [-M, M]. That is, I’m approximating

cosh(x)

with

c + kx²

and finding c and k that minimize the integral

\int_{-M}^M (\cosh(x) - c - kx^2)^2\, dx

The optimal values of c and k vary with M. As M increases, c decreases and k increases.

It works out that the optimal value of c is

-\frac{3 \left(M^2 \sinh (M)+5 \sinh (M)-5 M \cosh (M)\right)}{2 M^3}

and the optimal value of k is

\frac{15 \left(M^2 \sinh (M)+3 \sinh (M)-3 M \cosh (M)\right)}{2 M^5}
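
If you’d like to reproduce the fit numerically, the formulas above are easy to code up. Here’s a sketch (the function names and the use of scipy for the error integral are my choices):

    import numpy as np
    from scipy.integrate import quad

    def c_opt(M):
        return -3*(M**2*np.sinh(M) + 5*np.sinh(M) - 5*M*np.cosh(M)) / (2*M**3)

    def k_opt(M):
        return 15*(M**2*np.sinh(M) + 3*np.sinh(M) - 3*M*np.cosh(M)) / (2*M**5)

    M = 1
    c, k = c_opt(M), k_opt(M)
    err = np.sqrt(quad(lambda x: (np.cosh(x) - c - k*x**2)**2, -M, M)[0])
    print(c, k, err)   # L2 norm of the error over [-M, M]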

Here’s a log-scale plot of the L² norm of the error, the square root of the integral above, for the optimal parameters as a function of M.

More on catenaries

Five places the Sierpiński triangle shows up https://www.johndcook.com/blog/2020/06/28/the-ubiquitous-sierpinski/ https://www.johndcook.com/blog/2020/06/28/the-ubiquitous-sierpinski/#comments Sun, 28 Jun 2020 11:35:59 +0000 https://www.johndcook.com/blog/?p=56748 The Sierpiński triangle is a fractal that comes up in unexpected places. I’m not that interested in fractals, and yet I’ve mentioned the Sierpiński triangle many times on this blog just because I run into it while looking at something else.

The first time I wrote about the Sierpiński triangle was when it came up in the context of a simple random process called the chaos game.

Unbiased chaos game results

Next I ran into Sierpiński in the context of cellular automata, specifically Rule 90. A particular initial condition for this rule leads to the image below. With other initial conditions you don’t get such a clean Sierpiński triangle, but you do get similar variations on the theme.

Rule 90 with one initial bit set

Next I ran into Sierpiński in the context of low-level programming. The following lines of C code print an asterisk when the bit-wise AND of two numbers is zero.

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%c", (i&j) ? ' ' : '*');
        printf("\n");
    }

A screenshot of the output shows our familiar triangle.

screen shot that looks like Sierpinski triangle

Then later I wrote a post looking at constructible n-gons, n-sided figures that can be constructed using only a straight edge and a compass. These only exist for special values of n. If you write these special values in binary, and replace the 1’s with a black square and the 0’s with a blank, you get yet another Sierpiński triangle.

Finally, if you look at the odd numbers in Pascal’s triangle, they also form a Sierpiński triangle.
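
The last example is easy to see for yourself. Here’s a tiny script (mine, not from the post) that marks the odd entries of Pascal’s triangle with asterisks:

    from math import comb

    N = 16
    for i in range(N):
        print("".join("*" if comb(i, j) % 2 else " " for j in range(i + 1)))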

Evolute of an egg https://www.johndcook.com/blog/2020/06/25/evolute-egg/ https://www.johndcook.com/blog/2020/06/25/evolute-egg/#respond Fri, 26 Jun 2020 02:14:49 +0000 https://www.johndcook.com/blog/?p=56638 The set of lines perpendicular to a curve are tangent to a second curve called the evolute. The lines perpendicular to the ellipse below are tangent to the curve inside called an astroid.

If we replace the ellipse with an egg, we get a similar shape, but less symmetric.

The equation for the egg is described here with parameters a = 3, b = 2, and k = 0.1. The ellipse above has the same a and b but k = 0.

I made the lines slightly transparent, setting alpha = 0.4, so the graph would be darker where many lines cross.

Related post: Envelopes of epicycloids

Sample size calculation https://www.johndcook.com/blog/2020/06/25/sample-size-calculation/ https://www.johndcook.com/blog/2020/06/25/sample-size-calculation/#respond Thu, 25 Jun 2020 16:31:07 +0000 https://www.johndcook.com/blog/?p=56595 If you’re going to run a test on rabbits, you have to decide how many rabbits you’ll use. This is your sample size. A lot of what statisticians do in practice is calculate sample sizes.

A researcher comes to talk to a statistician. The statistician asks what effect size the researcher wants to detect. Do you think the new thing will be 10% better than the old thing? If so, you’ll need to design an experiment with enough subjects to stand a good chance of detecting a 10% improvement. Roughly speaking, sample size is inversely proportional to the square of effect size. So if you want to detect a 5% improvement, you’ll need 4 times as many subjects as if you want to detect a 10% improvement.
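
To make the square-law relationship concrete, here is a standard back-of-the-envelope calculation for comparing two means, not anything specific to the conversation described in this post. The effect size here is expressed in standard deviation units, which is an assumption on my part.

    from scipy.stats import norm

    def n_per_group(delta, sigma=1.0, alpha=0.05, power=0.8):
        """Approximate sample size per group for a two-sample comparison of means."""
        z = norm.ppf(1 - alpha/2) + norm.ppf(power)
        return 2 * (sigma * z / delta)**2

    print(n_per_group(0.10))   # detect an effect of 0.10 standard deviations
    print(n_per_group(0.05))   # half the effect size requires about 4x the subjects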

You’re never guaranteed to detect an improvement. The race is not always to the swift, nor the battle to the strong. So it’s not enough to think about what kind of effect size you want to detect, you also have to think about how likely you want to be to detect it.

Here’s what often happens in practice. The researcher makes an arbitrary guess at what effect size she expects to see. Then initial optimism may waver and she decides it would be better to design the experiment to detect a more modest effect size. When asked how high she’d like her chances to be of detecting the effect, she thinks 100% but says 95% since it’s necessary to tolerate some chance of failure.

The statistician comes back and says the researcher will need a gargantuan sample size. The researcher says this is far outside her budget. The statistician asks what the budget is, and what the cost per subject is, and then the real work begins.

The sample size the negotiation will converge on is the budget divided by the cost per sample. The statistician will fiddle with the effect size and probability of detecting it until the inevitable sample size is reached. This sample size, calculated to 10 decimal places and rounded up to the next integer, is solemnly reported with a post hoc justification containing no mention of budgets.

Sample size is always implicitly an economic decision. If you’re willing to make it explicitly an economic decision, you can compute the expected value of an experiment by placing a value on the possible outcomes. You make some assumptions—you always have to make assumptions—and calculate the probability under various scenarios of reaching each conclusion for various sample sizes, and select the sample size that leads to the best expected value.

More on experimental design

[1] There are three ways an A/B test can turn out: A wins, B wins, or there isn’t a clear winner. There’s a tendency to not think enough about the third possibility. Interim analysis often shuts down an experiment not because there’s a clear winner, but because it’s becoming clear there is unlikely to be a winner.

Binomial coefficients mod primes https://www.johndcook.com/blog/2020/06/24/binomial-coefficients-mod-primes/ https://www.johndcook.com/blog/2020/06/24/binomial-coefficients-mod-primes/#comments Wed, 24 Jun 2020 13:31:22 +0000 https://www.johndcook.com/blog/?p=56531 Imagine seeing the following calculation:

{95 \choose 57} = {19\cdot 5 \choose 19\cdot 3} = {5 \choose 3} = \frac{5\cdot 4}{2\cdot 1} = 10

The correct result is

{95 \choose 57} = 487343696971437395556698010

and so the first calculation is off by 25 orders of magnitude.

But there’s a variation on the calculation above that is correct! A theorem by Édouard Lucas from 1872 says that for p prime and for any nonnegative integers m and n,

{pm \choose pn} \equiv {m \choose n} \pmod p

So while the initial calculation was grossly wrong as stated, it is perfectly correct mod 19. If you divide 487343696971437395556698010 by 19 you’ll get a remainder of 10.

A stronger version of Lucas’ theorem [1] says that if p is at least 5, then you can replace mod p with mod p³. This is a stronger conclusion because it says not only is the difference between the left and right side of the congruence divisible by p, it’s also divisible by p² and p³.

In our example, not only is the remainder 10 when 487343696971437395556698010 is divided by 19, the remainder is also 10 when dividing by 19² = 361 and 19³ = 6859.
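
This is easy to check directly in Python:

    from math import comb

    c = comb(95, 57)
    print(c)                 # 487343696971437395556698010
    for m in [19, 19**2, 19**3]:
        print(m, c % m)      # remainder is 10 in each case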

More on binomial coefficients

[1] V. Brun, J. O. Stubban, J. E. Fjeldstad, L. Tambs, K. E. Aubert, W. Ljunggren, E. Jacobsthal. On the divisibility of the difference between two binomial coefficients, Den 11te Skandinaviske Matematikerkongress, Trondheim, 1949, 42–54.

Surface of revolution with minimum area https://www.johndcook.com/blog/2020/06/21/minimal-surface-revolution/ https://www.johndcook.com/blog/2020/06/21/minimal-surface-revolution/#comments Sun, 21 Jun 2020 20:38:28 +0000 https://www.johndcook.com/blog/?p=56402 Suppose you’re given two points (x1, y1) and (x2, y2) with y1 and y2 positive. Find the smooth positive curve f(x) that passes through the two points such that the area of the surface formed by rotating the graph of f around the x-axis is minimized.

You can state this as a problem in calculus of variations, which leads to a differential equation, which leads to the solution

f(x) = c cosh((x + d)/c).

In other words, the surface area is minimized when the graph of f is a piece of a catenary [1].

This is interesting because the answer is not something you’re likely to guess, unlike say the isoperimetric problem, where it’s easy to guess (but hard to prove) that the solution is a circle.

There’s also some interesting fine print to the solution. It’s not quite right to say that the solution is a catenary. To be more precise we should say that if there is a unique catenary that passes through both specified points, then it is the smooth curve with minimal area when rotated about the x-axis. But there are a couple things that could go wrong.

It’s possible that two catenaries pass through the given points, and in that case one of the catenaries leads to minimal surface area. But it’s also possible that there is no catenary passing through the given points.

My first thought would be that you could always find values of c and d so that the function f passes through the points (x1, y1) and (x2, y2), but that’s not true. Often you can, but if the difference in the y’s is very large relative to the difference in the x’s it might not be possible.

Suppose the graph of f passes through (0, 1) and (1, y2).

Since the graph passes through the first point, we have

c cosh(d/c) = 1.

Since cosh(x) ≥ 1, we must also have c ≤ 1. And since our curve is positive, we must have c > 0. We can maximize

c cosh((1 + d)/c)

for 0 < c ≤ 1 subject to the constraint

c cosh(d/c) = 1

to find the maximum possible value of y2. If we ask Mathematica

    NMaximize[
        {   c Cosh[(1 + d)/c], 
            {0 < c <= 1}, 
            {c Cosh[d/c] == 1}
        }, 
        {c, d}
    ]

we get

    {6.45659*10^8, {c -> 0.0352609, d -> -0.142316}}

meaning the largest possible value of y2 is 6.45659 × 10^8, and it occurs when c = 0.0352609, d = -0.142316.

Update: See the comment by Bill Smathers below arguing that the maximum should be unbounded. If the argument is correct, this would imply the code above ran into a numerical limitation.

Related posts

[1] See Calculus of Variations by I. M. Gelfand and S. V. Fomin.

Chemical element frequency in writing https://www.johndcook.com/blog/2020/06/20/element-frequency/ https://www.johndcook.com/blog/2020/06/20/element-frequency/#comments Sat, 20 Jun 2020 20:41:39 +0000 https://www.johndcook.com/blog/?p=56329 How do the frequencies of chemical element names in English text compare to the abundance of elements in Earth’s crust? Do we write most frequently about the elements that appear most frequently?

It turns out the answer is “not really.” The rarest elements rarely appear in writing. We don’t have much to say about dysprosium, thulium, or lutetium, for example. But overall there’s only a small correlation between word frequency and chemical frequency. (The rank correlation is substantially higher than ordinary linear correlation.)

We write often about things like oxygen and iron because they’re such a part of the human experience. On the other hand, we care about some things like silver and gold precisely because they are rare.

Here are the most common elements according to text usage.

|------------+--------+-----------+---------+------------|
| element    | word % | word rank | earth % | earth rank |
|------------+--------+-----------+---------+------------|
| lead       |  15.50 |         1 |   0.001 |         36 |
| gold       |  11.64 |         2 |   0.000 |         75 |
| iron       |  11.14 |         3 |   5.612 |          4 |
| silver     |   7.38 |         4 |   0.000 |         68 |
| carbon     |   5.15 |         5 |   0.012 |         17 |
| oxygen     |   5.13 |         6 |  45.956 |          1 |
| copper     |   4.61 |         7 |   0.006 |         26 |
| hydrogen   |   3.51 |         8 |   0.139 |         10 |
| sodium     |   3.38 |         9 |   2.352 |          6 |
| calcium    |   2.84 |        10 |   4.137 |          5 |
| nitrogen   |   2.79 |        11 |   0.002 |         34 |
| mercury    |   2.22 |        12 |   0.000 |         67 |
| tin        |   2.13 |        13 |   0.000 |         51 |
| potassium  |   1.94 |        14 |   2.083 |          8 |
| zinc       |   1.70 |        15 |   0.007 |         24 |
| silicon    |   1.12 |        16 |  28.112 |          2 |
| nickel     |   1.08 |        17 |   0.008 |         23 |
| phosphorus |   1.05 |        18 |   0.104 |         11 |
| magnesium  |   0.98 |        19 |   2.322 |          7 |
| sulfur     |   0.84 |        20 |   0.035 |         16 |
|------------+--------+-----------+---------+------------|

This is based on the Google book corpus summarized here. There’s some ambiguity; I imagine most uses of “lead” are the verb and not the element name. Some portion of the uses of “iron” refer to a device for smoothing wrinkles out of clothes.

Word percentage is relative to the set of chemical element names. Earth percentage is relative to the Earth’s crust.

The percentages above have been truncated for presentation; obviously the abundance of gold, silver, mercury, and tin is not zero, though it is when rounded to three decimal places. The full data for the first 111 elements is available here.
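
If you want to compute the correlations mentioned at the top of the post, you could do something like the following with the two percentage columns. Note that this uses only the 20 rows above, so the numbers will differ somewhat from correlations based on the full data set.

    from scipy.stats import pearsonr, spearmanr

    # word % and earth % columns from the table above
    word  = [15.50, 11.64, 11.14, 7.38, 5.15, 5.13, 4.61, 3.51, 3.38, 2.84,
             2.79, 2.22, 2.13, 1.94, 1.70, 1.12, 1.08, 1.05, 0.98, 0.84]
    earth = [0.001, 0.000, 5.612, 0.000, 0.012, 45.956, 0.006, 0.139, 2.352, 4.137,
             0.002, 0.000, 0.000, 2.083, 0.007, 28.112, 0.008, 0.104, 2.322, 0.035]

    print(pearsonr(word, earth)[0])    # ordinary linear correlation
    print(spearmanr(word, earth)[0])   # rank correlation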

Convex function of diagonals and eigenvalues https://www.johndcook.com/blog/2020/06/18/convex-function-eigenvalues/ https://www.johndcook.com/blog/2020/06/18/convex-function-eigenvalues/#comments Thu, 18 Jun 2020 14:45:26 +0000 https://www.johndcook.com/blog/?p=56227 Sam Walters posted an elegant theorem on his Twitter account this morning. The theorem follows the pattern of an equality for linear functions generalizing to an inequality for convex functions. We’ll give a little background, state the theorem, and show an example application.

Let A be a real symmetric n×n matrix, or more generally a complex n×n Hermitian matrix, with entries aij. Note that the diagonal elements aii are real numbers even if some of the other entries are complex. (A Hermitian matrix equals its conjugate transpose, which means the elements on the diagonal equal their own conjugate.)

A general theorem says that A has n eigenvalues. Denote these eigenvalues λ1, λ2, …, λn.

It is well known that the sum of the diagonal elements of A equals the sum of its eigenvalues.

\sum_{i=1}^n a_{ii} = \sum_{i=1}^n \lambda_i

We could trivially generalize this to say that for any linear function φ: R → R,

\sum_{i=1}^n \varphi(a_{ii}) = \sum_{i=1}^n \varphi({\lambda_i})

because we could pull any shifting and scaling constants out of the sum.

The theorem Sam Walters posted says that the equality above extends to an inequality if φ is convex.

\sum_{i=1}^n \varphi(a_{ii}) \leq \sum_{i=1}^n \varphi({\lambda_i})

Here’s an application of this theorem. Assume the eigenvalues of A are all positive and let φ(x) = – log(x). Then φ is convex, and

-\sum_{i=1}^n \log(a_{ii}) \leq -\sum_{i=1}^n \log({\lambda_i})

and so

\prod_{i=1}^n a_{ii} \geq \prod_{i=1}^n \lambda_i = \det(A)

i.e. the product of the diagonals of A is an upper bound on the determinant of A.
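
Here’s a quick numerical illustration with a random positive definite matrix (the construction is mine):

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5))
    A = B @ B.T + 5*np.eye(5)                      # symmetric positive definite

    eigenvalues = np.linalg.eigvalsh(A)
    print(np.trace(A), eigenvalues.sum())          # equal, up to roundoff
    print(np.prod(np.diag(A)), np.linalg.det(A))   # the first number is the larger one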

This post illustrates two general principles:

  1. Linear equalities often generalize to convex inequalities.
  2. When you hear a new theorem about convex functions, see what it says about exp or -log.

More linear algebra posts

Bit flipping to primes https://www.johndcook.com/blog/2020/06/18/bit-flipping-to-primes/ https://www.johndcook.com/blog/2020/06/18/bit-flipping-to-primes/#comments Thu, 18 Jun 2020 12:33:42 +0000 https://www.johndcook.com/blog/?p=56224 Someone asked an interesting question on MathOverflow: given an odd number, can you always flip a bit in its binary representation to make it prime?

It turns out the answer is no, but apparently it is very often the case that an odd number is just a bit flip away from being prime. I find that surprising.

Someone pointed out that 2131099 is not a bit flip away from a prime, and that this may be the smallest example [1]. The counterexample 2131099 is itself prime, so you could ask whether an odd number is either a prime or a bit flip away from a prime. Is this always the case? If not, is it often the case?

The MathOverflow question was stated in terms of Hamming distance, counting the number of bits in which two bit sequences differ. It asked whether odd numbers are always Hamming distance 1 away from a prime. My restatement of the question asks whether the Hamming distance is always at most 1, or how often it is no more than 1.

You could ask more generally about the Hamming distance to the nearest prime. Is it bounded, if not by 1, then by another finite number? If so, what is the smallest such bound? What is the probability that its value is 1? Etc.

This ties into a couple of other things I’ve blogged about. A few weeks ago I wrote about new work on the problem of finding the proportion of odd numbers that can be written as the sum of a power of 2 and a prime. That’s a little different problem, since flipping a bit is an XOR (exclusive or), which is not always the same as addition. It also leaves out the possibility of flipping a bit beyond the most significant bit of the number, i.e. adding to a number n a power of 2 greater than n.

Another related post is on the Rowhammer attack on public key cryptography. By flipping a bit in the product of two primes, you can produce a number which is much easier to factor.

These two posts suggest a variation on the original problem where we disallow flipping bits higher than the most significant bit of n. So given a k-bit number n, how often can we flip one of its k bits and produce a prime?
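
Here is one way you might start exploring that restricted question empirically; the script, the cutoff of 10,000, and the use of SymPy’s isprime are all my choices.

    from sympy import isprime

    def flip_within(n):
        """Can flipping one of n's own bits (positions 0 through k-1) give a prime?"""
        return any(isprime(n ^ (1 << i)) for i in range(n.bit_length()))

    odds = range(3, 10001, 2)
    print(sum(flip_within(n) for n in odds) / len(odds))   # fraction of odd n under 10,000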

[1] Note that the bit flipped may be higher than the most significant bit of the number, unless ruled out as in the paragraph above. Several people have asked “What about 85?” It is true that flipping any of the seven lowest bits of 85 will not yield a prime. But flipping a zero bit in a more significant position will give a prime. For example, 1024 + 85 is prime. But for 2131099 it is not possible to add any larger power of 2 to the number and produce a prime.

The shape of beams and bulkheads https://www.johndcook.com/blog/2020/06/17/beams-and-bulkheads/ https://www.johndcook.com/blog/2020/06/17/beams-and-bulkheads/#respond Wed, 17 Jun 2020 12:33:32 +0000 https://www.johndcook.com/blog/?p=56148 After finding the NASA publication I mentioned in my previous post, I poked around a while longer in the NASA Technical Reports Server and found a few curiosities. One was that at one time NASA was interested in shapes similar to the superellipses and squircles I’ve written about before.

A report [1] that I stumbled on was concerned with shapes with boundary described by

\left| \frac{x}{A} \right|^\alpha + \left| \frac{y}{B} \right|^\beta = 1

The superellipse corresponds to α = β = 2.5, and the squircle corresponds to α = β = 4 (or so), but the report was interested in the more general case in which α and β could be different.

By changing α and β separately we can let the curvature of the sides vary separately. Here are a couple examples. Both use A = 0.5, B = 0.8, and β = 1.8. The first uses α = 3.5

and the second creates a straighter line on the vertical sides by using α = 6.
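
Here’s one way to draw curves like the two just described, using the same parameters; the parametrization below is a standard trick and the code is mine.

    import numpy as np
    import matplotlib.pyplot as plt

    def boundary(A, B, alpha, beta, n=1000):
        # Parametrize |x/A|^alpha + |y/B|^beta = 1
        t = np.linspace(0, 2*np.pi, n)
        x = A * np.sign(np.cos(t)) * np.abs(np.cos(t))**(2/alpha)
        y = B * np.sign(np.sin(t)) * np.abs(np.sin(t))**(2/beta)
        return x, y

    for alpha in [3.5, 6]:
        x, y = boundary(0.5, 0.8, alpha, 1.8)
        plt.plot(x, y, label=f"alpha = {alpha}")

    plt.gca().set_aspect("equal")
    plt.legend()
    plt.show()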

So why was NASA interested in these shapes? According to [1], “The primary objective of the current research has been the optimun [sic] design of structural shapes” subject to the equation above and its three dimensional analog.

In order to provide material useful to the space program, it was decided to initiate the research with a determination of the geometrical and inertial properties of the above classes of shells. This was followed with a study of shells of revolution which were optimized with respect to maximum enclosed volume and minimum weight. A study on the vibration of beams was also reported in which the beam cross-section was defined by (1). Since bulkheads for bodies of type (2) require plate shapes of type (1), investigation was continued on clamped plates defined by (1).

Here (1) refers to the equation above and (2) refers to its 3-D version. The goal was to optimize various objectives over a family of shapes that was flexible but still easy enough to work with mathematically. The report [1] is concerned with computing conformal maps of the disk into these shapes in order to make it easier to solve equations defined over regions of that shape.

***

[1] The conformal mapping of the interior of the unit circle onto the interior of a class of smooth curves. Thomas F. Moriarty and Will J. Worley. NASA Contractor Report CR-1357. May 1969.

NASA’s favorite ODE solver https://www.johndcook.com/blog/2020/06/16/nasas-favorite-ode-solver/ https://www.johndcook.com/blog/2020/06/16/nasas-favorite-ode-solver/#comments Tue, 16 Jun 2020 22:19:43 +0000 https://www.johndcook.com/blog/?p=56131 NASA’s Orbital Flight Handbook, published in 1963, is a treasure trove of technical information, including a section comparing the strengths and weaknesses of several numerical methods for solving differential equations.

The winner was a predictor-corrector scheme known as Gauss-Jackson, a method I have not heard of outside of orbital mechanics, but one apparently particularly well suited to NASA’s needs.

The Gauss-Jackson second-sum method is strongly recommended for use in either Encke or Cowell [approaches to orbit modeling]. For comparable accuracy, it will allow step-sizes larger by factors of four or more than any of the forth order methods. … As compared with unsummed methods of comparable accuracy, the Gauss-Jackson method has the very important advantage that roundoff error growth is inhibited. … The Gauss-Jackson method is particularly suitable on orbits where infrequent changes in the step-size are necessary.

Here is a table summarizing the characteristics of each of the solvers.

Notice that Gauss-Jackson is the only method whose roundoff error accumulation is described as “excellent.”

A paper from 2004 [1] implies that the Gauss-Jackson method was still in use at NASA at the time of writing.

The Gauss-Jackson multi-step predictor-corrector method is widely used in numerical integration problems for astrodynamics and dynamical astronomy. The U.S. space surveillance centers have used an eighth-order Gauss-Jackson algorithm since the 1960s.

I could imagine a young hotshot explaining to NASA why they should use some other ODE solver, only to be told that the agency had already evaluated the alternatives half a century ago, and that the competitors didn’t have the same long-term accuracy.

More math and space posts

[1] Matthew M. Berry and Liam M. Healy. Implementation of the Gauss-Jackson Integration for Orbit Propagation. The Journal of the Astronautical Sciences, Vol 52, No 3, July-September 2004, pp. 311–357.

Hohmann transfer orbit https://www.johndcook.com/blog/2020/06/15/hohmann-transfer-orbit/ https://www.johndcook.com/blog/2020/06/15/hohmann-transfer-orbit/#comments Mon, 15 Jun 2020 13:29:41 +0000 https://www.johndcook.com/blog/?p=55959 How does a spacecraft orbiting a planet move from one circular orbit to another? It can’t just change lanes like a car going around a racetrack because speed and altitude cannot be changed independently.

The most energy-efficient way to move between circular orbits is the Hohmann transfer orbit [1]. The Hohmann orbit is an idealization, but it approximates maneuvers actually done in practice.

The Hohmann transfer requires applying thrust twice: once to leave the first circular orbit into the elliptical orbit, and once again to leave the elliptical orbit for the new circular orbit.

Hohmann transfer orbit

Suppose we’re in the orbit represented by the inner blue circle above and we want to move to the outer green circle. We apply our first instantaneous burst of thrust, indicated by the inner ×, and that puts us into the orange elliptical orbit.

(We can’t move faster in our current orbit without continually applying thrust because velocity determines altitude. The new orbit will pass through the point at which we applied the thrust, and so our new orbit cannot be a circle because distinct concentric circles don’t intersect.)

The point at which we first apply thrust will be the point of the new orbit closest to the planet, the point with maximum kinetic energy. The point furthest from the planet, the point with maximum potential energy, will occur 180° later on the opposite side. The first burst of thrust is calculated so that the maximum altitude of the resulting elliptical orbit is the desired altitude of the new circular orbit.

Once the elliptical orbit is at its maximum distance from the planet, marked by the outer ×, we apply the second thrust.  The amount of thrust is whatever it needs to be in order to maintain a circular orbit at the new altitude. The second half of the elliptical orbit, indicated by the dashed orange curve, is not taken; it’s only drawn to show the orbit we would stay on if we didn’t apply the second thrust.

So in summary, we use one burst of thrust to enter an elliptic orbit, and one more burst of thrust to leave that elliptical orbit for the new circular orbit. There are ways to move between circular orbits more quickly, but they require more fuel.
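
The size of the two burns follows from the vis-viva equation; the formulas below are the standard textbook ones rather than anything derived in this post, and the LEO-to-GEO numbers are just an illustration.

    import numpy as np

    def hohmann_delta_v(r1, r2, mu):
        """Delta-v of the two burns for a Hohmann transfer between circular orbits."""
        dv1 = np.sqrt(mu/r1) * (np.sqrt(2*r2/(r1 + r2)) - 1)
        dv2 = np.sqrt(mu/r2) * (1 - np.sqrt(2*r1/(r1 + r2)))
        return dv1, dv2

    mu = 398600.4418                               # km^3/s^2 for Earth
    print(hohmann_delta_v(6678.0, 42164.0, mu))    # roughly (2.4, 1.5) km/s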

The same principles work in reverse, and so you could also use a Hohmann transfer to descend from a higher orbit to a lower one. You would apply your thrust in the direction opposite your direction of motion.

There are several idealizations in the Hohmann transfer orbit. The model assumes that the orbits are planar, that the initial orbit and the final orbit are circular, and that the two burns are each instantaneous.

The Hohmann transfer also assumes that the mass of the spacecraft is negligible compared to the planet. This would apply, for example, to a typical communication satellite, but perhaps not to a Death Star.

More orbital mechanics posts

[1] If you’re moving from one orbit to another whose radius is more than about 12 times as large, then the bi-elliptic transfer maneuver would use less fuel. Instead of taking half of an elliptical orbit to make the transfer, it fires thrusters three times, using half each of two different elliptical orbits to reach the desired circular orbit.

Change of basis and Stirling numbers https://www.johndcook.com/blog/2020/06/14/change-of-basis/ https://www.johndcook.com/blog/2020/06/14/change-of-basis/#comments Sun, 14 Jun 2020 19:12:47 +0000 https://www.johndcook.com/blog/?p=56032 Polynomials form a vector space—the sum of two polynomials is a polynomial etc.—and the most natural basis for this vector space is powers of x:

1, x, x², x³, …

But the power basis is not the only possible basis, and often not the most useful basis in application.

Falling powers

In some applications the falling powers of x are a more useful basis. For positive integers n, the nth falling power of x is defined to be

x^{\underbar{\small{\emph{n}}}} = x(x-1)(x-2)\cdots(x-n+1)

Falling powers come up in combinatorics, in the calculus of finite differences, and in hypergeometric functions.

Change of basis

Since we have two bases for the vector space of polynomials, we can ask about the matrices that represent the change of basis from one to the other, and here’s where we see an interesting connection.

The entries of these matrices are numbers that come up in other applications, namely the Stirling numbers. You can think of Stirling numbers as variations on binomial coefficients. More on Stirling numbers here.

In summation notation, we have

\begin{align*} x^{\underbar{\small{\emph{n}}}} &= \sum_{k=0}^n S_1(n,k)x^{\text{\small{\emph{k}}}} \\ x^{\text{\small{\emph{n}}}} &= \sum_{k=0}^n S_2(n,k)x^{\underbar{\small\emph{k}}} \\ \end{align*}

where the S1 are the (signed) Stirling numbers of the 1st kind, and the S2 are the Stirling numbers of the 2nd kind.

(There are two conventions for defining Stirling numbers of the 1st kind, differing by a factor of (-1)^(n-k).)

Matrix form

This means the (i, j)th element of the matrix representing the change of basis from the power basis to the falling power basis is S1(i, j), and the (i, j)th entry of the matrix for the opposite change of basis is S2(i, j). These are lower triangular matrices because S1(i, j) and S2(i, j) are zero for j > i.

These are infinite matrices since there’s no limit to the degree of a polynomial. But if we limit our attention to polynomials of degree less than m, we take the upper left m by m submatrix of the infinite matrix. For example, if we look at polynomials of degree 4 or less, we have

\begin{bmatrix} x^\underbar{\tiny{0}} \\ x^\underbar{\tiny{1}} \\ x^\underbar{\tiny{2}} \\ x^\underbar{\tiny{3}} \\ x^\underbar{\tiny{4}} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 \\ 0 & 2 & -3 & 1 & 0 \\ 0 & -6 & 11 & -6 & 1 \\ \end{bmatrix} \begin{bmatrix} x^\text{\tiny{0}} \\ x^\text{\tiny{1}} \\ x^\text{\tiny{2}} \\ x^\text{\tiny{3}} \\ x^\text{\tiny{4}} \\ \end{bmatrix}

to convert from powers to falling powers, and

\begin{bmatrix} x^\text{\tiny{0}} \\ x^\text{\tiny{1}} \\ x^\text{\tiny{2}} \\ x^\text{\tiny{3}} \\ x^\text{\tiny{4}} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 3 & 1 & 0 \\ 0 & 1 & 7 & 6 & 1 \\ \end{bmatrix} \begin{bmatrix} x^\underbar{\tiny{0}} \\ x^\underbar{\tiny{1}} \\ x^\underbar{\tiny{2}} \\ x^\underbar{\tiny{3}} \\ x^\underbar{\tiny{4}} \\ \end{bmatrix}

going from falling powers to powers.
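
Rather than looking up Stirling numbers, you can generate these matrices by expanding falling powers symbolically. Here’s a small SymPy sketch (mine); the second matrix is just the inverse of the first.

    from sympy import symbols, expand, Matrix, S

    x = symbols('x')
    m = 5

    def falling(n):
        p = S.One
        for i in range(n):
            p *= (x - i)
        return expand(p)

    # Entry (n, k) is the coefficient of x**k in the nth falling power
    S1 = Matrix(m, m, lambda n, k: falling(n).coeff(x, k))
    print(S1)         # matches the first matrix above
    print(S1.inv())   # Stirling numbers of the 2nd kind, the second matrix above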

Incidentally, if we filled a matrix with unsigned Stirling numbers of the 1st kind, we would have the change of basis matrix going from the power basis to rising powers defined by

x^{\overline{n}} = x(x+1)(x+2)\cdots(x+n-1)

It may be hard to see, but there’s a bar on top of the exponent n for rising powers whereas before we had a bar under the n for falling powers.

Related posts

ODE solver landscape https://www.johndcook.com/blog/2020/06/12/ode-solver-landscape/ https://www.johndcook.com/blog/2020/06/12/ode-solver-landscape/#respond Fri, 12 Jun 2020 23:19:04 +0000 https://www.johndcook.com/blog/?p=55951 Many methods for numerically solving ordinary differential equations are either Runge-Kutta methods or linear multistep methods. These methods can either be explicit or implicit.

The table below shows the four combinations of these categories and gives some examples of each.

\begin{tabular}{|l|ll|} \hline  & Runge-Kutta & Linear multistep\\ \hline Explicit & ERK & Adams-Bashforth\\ Implicit & (S)DIRK & Adams-Moulton, BDF\\ \hline \end{tabular}

Runge-Kutta methods advance the solution of a differential equation one step at a time. That is, these methods approximate the solution at the next time step using only the solution at the current time step and the differential equation itself.

Linear multistep methods approximate the solution at the next time step using the computed solutions at the latest several time steps.

Explicit methods express the solution at the next time step as an explicit function of other information, not including the solution itself. The solution at the next time step appears on only one side of the equations.

Implicit methods express the solution at the next time step as a function of other information including the solution itself. The solution at the next time step appears on both sides of the equations. Additional work needs to be done to solve for the solution.
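
A tiny example of the difference (mine, not from the post): explicit and implicit Euler applied to the test equation x′ = λx. The implicit update has to be solved for the new value, which is trivial here because the equation is linear.

    # Explicit vs. implicit Euler for x' = lam*x with x(0) = 1
    lam, h, steps = -10.0, 0.25, 20
    x_explicit = x_implicit = 1.0
    for _ in range(steps):
        x_explicit = x_explicit + h*lam*x_explicit   # new value given directly
        x_implicit = x_implicit / (1 - h*lam)        # solved from x_new = x_old + h*lam*x_new

    print(x_explicit, x_implicit)   # explicit Euler blows up at this step size; implicit decays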

More on explicit vs implicit methods here.

In the table above, ERK stands for, not surprisingly, explicit Runge-Kutta methods. DIRK stands for diagonally implicit Runge-Kutta. SDIRK stands for singly diagonally implicit Runge-Kutta. BDF stands for backward difference formulas.

More posts on ODE solvers

New math for going to the moon https://www.johndcook.com/blog/2020/06/12/new-math-for-going-to-the-moon/ https://www.johndcook.com/blog/2020/06/12/new-math-for-going-to-the-moon/#comments Fri, 12 Jun 2020 18:51:12 +0000 https://www.johndcook.com/blog/?p=55927 spacecraft rendezvous

Before I went to college, I’d heard that it took new math and science for Apollo to get to the moon. Then in college I picked up the idea that Apollo required a lot of engineering, but not really any new math or science. Now I’ve come full circle and have some appreciation for the math research that was required for the Apollo landings.

Celestial mechanics had been studied long before the Space Age, but that doesn’t mean the subject was complete. According to One Giant Leap,

In the weeks after Sputnik, one Langley [Research Center] scientist went looking for books on orbital mechanics—how to fly in space—and in the Langley technical library he found exactly one: Forest R. Moulton’s [1] An Introduction to Celestial Mechanics. In 1958 Langley was in possession of one of the most recent editions of Moulton: the 1914 update of the 1902 edition.

I have a quibble with part of the quote above. The author describes orbital mechanics as “how to fly in space.” More technically, at the time, orbital mechanics was “how things fly through space.” Orbital mechanics was passive. You wanted to know how, for example, Titan moves around Saturn. Nobody asked about the most efficient way to change the orbit of Titan so that it ends up at a certain place at a certain time.

NASA needed active orbital mechanics. It had to do more than simply describe existing orbits; it had to design orbits. And it had to control orbits. None of the terms in your equations are known to infinite precision, so it is not enough to understand the exact equations under ideal circumstances. You have to understand how uncertainties in the parts impact the whole, and how to adjust for them.

And all this has to be done in a computer with about 500 kilobits of ROM [2]. Because the computer memory was limited, NASA had to know which terms in the equations could be dropped, what approximations could be made, etc. Understanding how to approximate a system well with limited resources is much harder than working with exact equations [3].

Nobody at NASA would have said “We’ve got the math in the bag. Now we just need the engineers to get busy.”

Related posts

[1] This is the same Moulton of Adams-Moulton and Adams-Bashforth-Moulton numerical methods for solving differential equations. Presumably Mr. Moulton’s interest in numerical solutions to differential equations came out of his interest in celestial mechanics. See where Adams-Moulton fits into the ODE solver landscape in the next post.

[2] Each word in the Apollo Guidance Computer was 15 bits of data plus one check bit. There were 2048 words of RAM and 36,864 words of ROM. That comes to 552,960 bits of ROM, excluding check bits, or about 68 kilobytes in terms of 8-bit bytes.

[3] Not that the “exact” equations are actually exact. When you write down the equations of motion for three point masses, for example, you’ve already done a great deal of simplification.

]]>
https://www.johndcook.com/blog/2020/06/12/new-math-for-going-to-the-moon/feed/ 1
The bucket that can’t hold enough paint to paint itself https://www.johndcook.com/blog/2020/06/11/gabriels-horn/ https://www.johndcook.com/blog/2020/06/11/gabriels-horn/#comments Thu, 11 Jun 2020 16:17:42 +0000 https://www.johndcook.com/blog/?p=55880 Gabriel's horn

Gabriel’s horn is the surface created by rotating the graph of 1/x, for x ≥ 1, around the x-axis. It is often introduced in calculus classes as an example of a solid with finite volume whose surface has infinite area. If it were a paint can, it could not hold enough paint to paint itself!

This post will do two things:

  1. explain why the paradox works, and
  2. explain why it’s not paradoxical after all.

Rather than working out the surface area and volume exactly as one would do in a calculus class, we’ll be a little less formal but also more general.

Original function

When you set up the integral to compute the volume of the solid bounded by rotating the graph of a function f, the integrand is proportional to the square of f. So rotating the graph of 1/x gives us an integral whose integrand is proportional to 1/x² and the integral converges.

When you set up the integral to compute the surface area, the integrand is 2π f √(1 + (f′)²), which is at least proportional to f itself rather than to its square. So for 1/x the integrand is at least a multiple of 1/x and the integral diverges.

Generalization

For the volume to be finite, all we need is that f is O(1/x), i.e. eventually bounded above by some multiple of 1/x, and in fact we could get by with less.

For the area to be infinite, it is sufficient for the function to be Ω(1/x), i.e. eventually bounded below by some multiple of 1/x. And as before, we could get by with less.

So to make another example like Gabriel’s horn, we could use any function in Θ(1/x), i.e. eventually bounded above and below by some multiple of 1/x. So we could, for example, use

f(x) = (x + cos²x) / (x² + 42)

If you’re unfamiliar with the notation here, see these notes on big-O and related notation.
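
Here is a quick numerical sanity check of the example above, using SciPy’s quad (a tooling assumption) and the horn’s domain x ≥ 1. The volume integral converges, while a lower bound on the surface area, 2π times the integral of f (valid since √(1 + (f′)²) ≥ 1), keeps growing as the upper limit increases.

    import numpy as np
    from scipy.integrate import quad

    def f(x):
        return (x + np.cos(x)**2) / (x**2 + 42)

    # Volume of the solid of revolution: pi * integral of f(x)^2 over [1, inf) converges.
    volume, _ = quad(lambda x: np.pi * f(x)**2, 1, np.inf)
    print(volume)

    # Lower bound on the surface area: 2*pi * integral of f over [1, N].
    # These truncated integrals grow roughly like log(N), so the area is infinite.
    for N in (10, 100, 1000, 10000):
        area_lb, _ = quad(lambda x: 2*np.pi*f(x), 1, N)
        print(N, area_lb)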

Resolution

Now back to the idea of filling Gabriel’s horn with paint. If we spread the paint the horn holds, a finite volume, on the outside of the can with any constant thickness, we can only cover a finite area. But the surface area is infinite, so we can’t paint the whole thing.

The resolution of the paradox is that we’re requiring the paint to be more realistic than the can. We’re implicitly letting the material of the can become thinner and thinner, with no limit on how thin it can be. If we also let the paint spread thinner and thinner at the right rate, we could cover the can with a coat of paint.

]]>
https://www.johndcook.com/blog/2020/06/11/gabriels-horn/feed/ 2
Where does the seven come from? https://www.johndcook.com/blog/2020/06/10/where-does-the-seven-come-from/ https://www.johndcook.com/blog/2020/06/10/where-does-the-seven-come-from/#comments Wed, 10 Jun 2020 21:53:58 +0000 https://www.johndcook.com/blog/?p=55850 Here’s a plot of exp(6it)/2 + exp(20it)/3:

Notice that the plot has 7-fold symmetry. You might expect 6-fold symmetry from looking at the equation. Where did the 7 come from?

I produced the plot using the code from this post, changing the line defining the function to plot to

    def f(t):
        # exp is the complex exponential, e.g. numpy.exp or cmath.exp
        return exp(6j*t)/2 + exp(20j*t)/3
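
If you don’t want to pull up the code from the earlier post, here is a minimal self-contained sketch (my own, not the original code) that produces the same kind of plot with Matplotlib.

    import matplotlib.pyplot as plt
    import numpy as np

    t = np.linspace(0, 2*np.pi, 2000)
    z = np.exp(6j*t)/2 + np.exp(20j*t)/3

    plt.plot(z.real, z.imag)
    plt.gca().set_aspect("equal")   # keep the 7-fold symmetry from being distorted
    plt.show()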

You can find the solution in Eliot’s comment in this Twitter thread.

Related links

]]>
https://www.johndcook.com/blog/2020/06/10/where-does-the-seven-come-from/feed/ 2
Gibbs phenomenon https://www.johndcook.com/blog/2020/06/10/gibbs-phenomenon/ https://www.johndcook.com/blog/2020/06/10/gibbs-phenomenon/#comments Wed, 10 Jun 2020 12:26:59 +0000 https://www.johndcook.com/blog/?p=55766 I realized recently that I’ve written about generalized Gibbs phenomenon, but I haven’t written about its original context of Fourier series. This post will rectify that.

The image below comes from a previous post illustrating Gibbs phenomenon for a Chebyshev approximation to a step function.

Gibbs phenomena for Chebyshev interpolation

Although Gibbs phenomenon comes up in many different kinds of approximation, it was first observed in Fourier series, and not by Gibbs [1]. This post will concentrate on Fourier series, and will give an example to correct some wrong conclusions one might draw about Gibbs phenomenon from the most commonly given examples.

The uniform limit of continuous functions is continuous, and so the Fourier series of a function cannot converge uniformly where the function is discontinuous. But what does the Fourier series do near a discontinuity?

It’s easier to say what the Fourier series does exactly at a discontinuity. If a function is piecewise continuous, then the Fourier series at a jump discontinuity converges to the average of the limits from the left and from the right at that point.

What the Fourier series does on either side of the discontinuity is more interesting. You can see high-frequency oscillations on either side. The series will overshoot on the high side of the jump and undershoot on the low side of the jump.

The amount of overshoot and undershoot is proportional to the size of the gap, about 9% of the gap. The exact proportion, in the limit, is given by the Wilbraham-Gibbs constant

\frac{1}{\pi} \int_0^\pi \frac{\sin t}{t} \, dt - \frac{1}{2} = 0.0894898\ldots

Gibbs phenomenon is usually demonstrated with examples that have a single discontinuity at the end of their period, such as a square wave or a saw tooth wave. But Gibbs phenomenon occurs at every discontinuity, wherever located, no matter how many there are.
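
Here is a minimal numerical sketch (a square wave, not the function f in the example below; the grid and number of harmonics are arbitrary choices) that checks the Wilbraham-Gibbs constant by quadrature and measures the overshoot of a truncated Fourier series.

    import numpy as np
    from scipy.integrate import quad

    # Check the Wilbraham-Gibbs constant numerically.
    integral, _ = quad(lambda t: np.sinc(t/np.pi), 0, np.pi)  # sinc(t/pi) = sin(t)/t
    c = integral/np.pi - 0.5
    print(c)                                 # 0.0894898...

    # Partial Fourier sum of a square wave that jumps from -1 to 1 (gap of size 2).
    k = np.arange(1, 500, 2)                 # odd harmonics
    x = np.linspace(1e-4, 0.5, 20000)        # zoom in just to the right of the jump at 0
    partial = (4/np.pi) * np.sin(np.outer(x, k)) @ (1/k)

    # Overshoot above the top of the jump; close to 2*c, i.e. about 9% of the gap.
    print(partial.max() - 1)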

The following example illustrates everything we’ve talked about above. We start with the function f plotted below on [-π, π] and imagine it extended periodically.

Notice three things about f:

  1. It is continuous at the point where it repeats since it equals 0 at -π and π.
  2. It has two discontinuities inside [-π, π].
  3. One of the discontinuities is larger than the other.

The following plot shows the sum of the first 100 terms in the Fourier series for f plotted over [-2π, 2π].

Notice three things about this plot that correspond to the three observations about the function we started with:

  1. There is nothing remarkable about the series at -π and π.
  2. You can see Gibbs phenomenon at the discontinuities of f.
  3. The overshoot and undershoot are larger at the larger discontinuity.

Related to the first point above, note that the derivative of f is discontinuous at the period boundary. A discontinuity in the derivative does not cause Gibbs phenomenon.

Here’s a close-up plot that shows the wiggling near the discontinuities.

Gibbs phenomena for other series

[1] Henry Wilbraham first described what we now call Gibbs phenomenon; Josiah Gibbs rediscovered it independently about 50 years later. This is an example of Stigler’s law of eponymy.

]]>
https://www.johndcook.com/blog/2020/06/10/gibbs-phenomenon/feed/ 1
Novel and extended floating point https://www.johndcook.com/blog/2020/06/10/novel-and-extended-floating-point/ https://www.johndcook.com/blog/2020/06/10/novel-and-extended-floating-point/#respond Wed, 10 Jun 2020 12:01:09 +0000 https://www.johndcook.com/blog/?p=55725 My first consulting project, right after I graduated college, was developing floating point algorithms for a microprocessor. It was fun work, coming up with ways to save a clock cycle or two, save a register, get an extra bit of precision. But nobody does that kind of work anymore. Or do they?

There is still demand for novel floating point work. Or maybe I should say there is once again demand for such work.

Companies are interested in low-precision arithmetic. They may want to save memory, and are willing to trade precision for memory. With deep neural networks, for example, quantity is more important than quality. That is, there are many weights to learn but the individual weights do not need to be very precise.
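
As a toy illustration of that tradeoff (my example, not the author’s), NumPy’s half-precision type uses a quarter of the memory of double precision and keeps roughly three significant decimal digits instead of about sixteen.

    import numpy as np

    w64 = np.random.default_rng(0).standard_normal(1_000_000)
    w16 = w64.astype(np.float16)

    print(w64.nbytes, w16.nbytes)        # 8,000,000 vs 2,000,000 bytes
    print(np.finfo(np.float64).eps)      # ~2.2e-16
    print(np.finfo(np.float16).eps)      # ~9.8e-4
    print(np.abs(w64 - w16).max())       # worst rounding error from the conversion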

And while some clients want low-precision, others want extra precision. I’m usually skeptical when someone tells me they need extended precision because typically they just need a better algorithm. And yet some clients do have a practical need for extended precision.
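
To illustrate the “better algorithm” point with a standard toy example (mine, not one from a client project): evaluating (1 − cos x)/x² for tiny x is destroyed by cancellation in double precision. A simple reformulation fixes it; mpmath (assumed installed) shows the brute-force extended-precision alternative.

    import numpy as np
    import mpmath

    x = 1e-8
    naive  = (1 - np.cos(x)) / x**2      # catastrophic cancellation: prints 0.0
    better = 2*np.sin(x/2)**2 / x**2     # algebraically identical, numerically stable

    mpmath.mp.dps = 50                   # brute force: 50 decimal digits
    exact = (1 - mpmath.cos(x)) / mpmath.mpf(x)**2

    print(naive, better, exact)          # the true value is very nearly 1/2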

Some clients aren’t primarily interested in precision; they’re interested in ways to reduce energy consumption. They’re more concerned with watts than with clock cycles or ulps. I imagine this will become more common.

For a while it seemed that 64-bit IEEE floating point numbers had conquered the world. Now I’m seeing more interest in smaller and larger formats, and simply different formats. New formats require new math algorithms, and that’s where I’ve helped clients.

If you’d like to discuss a novel floating point project, let’s talk.

More floating point posts

]]>
https://www.johndcook.com/blog/2020/06/10/novel-and-extended-floating-point/feed/ 0