The other butterfly effect

monarch butterfly

The original butterfly effect

The butterfly effect is the semi-serious claim that a butterfly flapping its wings can cause a tornado halfway around the world. It’s a poetic way of saying that some systems show sensitive dependence on initial conditions, that the slightest change now can make an enormous difference later. Often this comes up in the context of nonlinear, chaotic systems, but it isn’t limited to that. I give an example here of a linear differential equation whose solutions start out essentially the same but eventually diverge completely.

Once you think about these things for a while, you start to see nonlinearity and potential butterfly effects everywhere. There are tipping points everywhere waiting to be tipped!

The other butterfly effect

But a butterfly flapping its wings usually has no effect, even in sensitive or chaotic systems. You might even say especially in sensitive or chaotic systems.

Sensitive systems are not always and everywhere sensitive to everything. They are sensitive in particular ways under particular circumstances, and can otherwise be quite resistant to influence. Not only may a system be insensitive to butterflies, it may even be relatively insensitive to raging bulls. The raging bull may have little more long-term effect than a butterfly. This is what I’m calling the other butterfly effect.

Steering complex systems

In some ways chaotic systems are less sensitive to change than other systems. And this can be a good thing. Resistance to control also means resistance to unwanted tampering. Chaotic or stochastic systems can have a sort of self-healing property. They are more stable than more orderly systems, though in a different way.

The lesson that many people draw from their first exposure to complex systems is that there are high leverage points, if only you can find them and manipulate them. They want to insert a butterfly at just the right time and place to bring about a desired outcome. Instead, we should humbly evaluate to what extent it is possible to steer complex systems at all, which aspects can be steered, and how well they can be steered. The most effective intervention may not come from tweaking the inputs but from changing the structure of the system.


Linear regression and planet spacing

A while back I wrote about how planets are evenly spaced on a log scale. I made a bunch of plots, based on our solar system and the extrasolar systems with the most planets, and noted that they’re all roughly straight lines. Here’s the plot for our solar system, including dwarf planets, with distance on a logarithmic scale.

Distances in our solar system including planets and dwarf planets

This post is a quick follow up to that one. You can quantify how straight the lines are by using linear regression and comparing the actual spacing with the spacing given by the best straight line. Here I’m regressing the log of the distance of each planet from its star on the planet’s ordinal position.

NB: I am only using regression output as a measure of goodness of fit. I am not interpreting anything as a probability.

|-----------+-----------+----------+-----------|
| System    | Adjusted  | Slope    | Intercept |
|           | R-squared | p-value  | p-value   |
|-----------+-----------+----------+-----------|
| home      |    0.9943 | 1.29e-11 |  2.84e-08 |
| kepler90  |    0.9571 | 1.58e-05 |  1.29e-06 |
| hd10180   |    0.9655 | 1.41e-06 |  2.03e-07 |
| hr8832    |    0.9444 | 1.60e-04 |  5.57e-05 |
| trappist1 |    0.9932 | 8.30e-07 |  1.09e-09 |
| kepler11  |    0.9530 | 5.38e-04 |  2.00e-05 |
| hd40307   |    0.9691 | 2.30e-04 |  1.77e-05 |
| kepler20  |    0.9939 | 8.83e-06 |  3.36e-07 |
| hd34445   |    0.9679 | 2.50e-04 |  4.64e-04 |
|-----------+-----------+----------+-----------|
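
Here is a minimal sketch of how such a fit could be computed with statsmodels. The distances below are rounded values for our solar system, including the dwarf planets Ceres, Pluto, and Eris, so the output will be close to, but not exactly, the numbers in the “home” row of the table.

    import numpy as np
    import statsmodels.api as sm

    # Approximate distances in AU: planets plus Ceres, Pluto, and Eris (rounded)
    au = [0.39, 0.72, 1.00, 1.52, 2.77, 5.20, 9.55, 19.2, 30.1, 39.5, 67.7]

    position = np.arange(1, len(au) + 1)   # ordinal position of each body
    X = sm.add_constant(position)          # intercept term plus position
    fit = sm.OLS(np.log(au), X).fit()      # regress log distance on position

    print(fit.rsquared_adj)                # adjusted R-squared
    print(fit.pvalues)                     # p-values for intercept and slope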

R² is typically interpreted as how much of the variation in the data is explained by the model. In the table above, the smallest value of R² is 94%.

p-values are commonly, and wrongly, understood to be the probability of a model assumption being incorrect. As I said above, I’m completely avoiding any interpretation of p-values as the probability of anything, only noting that small values are consistent with a good fit.

Journals commonly, and wrongly, are willing to assume that anything with a p-value less than 0.05 is probably true. Some are saying the cutoff should be 0.005. There are problems with using any p-value cutoff, but I don’t want to get into them here. I’m only saying that small p-values are typically seen as evidence that a model fits, and the values above are orders of magnitude smaller than what journals consider acceptable evidence.

When I posted my article about planet spacing I got some heated feedback saying that this isn’t exact, that it’s unscientific, etc. I thought that was strange. I never said it was exact, only that it was a rough pattern. And although it’s not exact, it would be hard to find empirical studies of anything with such a good fit. If you held economics or psychology, for example, to the same standards of evidence, there wouldn’t be much left.

This pattern is known as the Titius-Bode law. I stumbled on it by making some plots. I assumed from the beginning that someone else must have done the same exercise and that the pattern had a name, but I didn’t know that name until later.

Someone sent me a paper that analyzes the data on extrasolar planets and Bode’s law, something much more sophisticated than the crude sketch above, but unfortunately I can’t find it this morning. I don’t recall what they did. Maybe they fit a hierarchical model where each system has its own slope and intercept.

One criticism has been that by regressing against planet order, you automatically get a monotone function. That’s true, but you do get a much better fit on a log scale than on a linear scale in any case. You might look at just the relative planet spacings without reference to order.

Proving life exists on Earth

NASA’s Galileo mission was primarily designed to explore Jupiter and its moons. In 1989, the Galileo probe started out traveling away from Jupiter in order to do a gravity assist swing around Venus. About a year later it also did a gravity assist maneuver around Earth. Carl Sagan suggested that when passing Earth, the Galileo probe should turn its sensors on Earth to look for signs of life. [1]

Artist conception of Galileo probe surveying Jupiter and its moons

Now obviously we know there’s life on Earth. But if we’re going to look for life on other planets, it’s reasonable to ask that our methods return positive results when examining the one planet we know for sure does host life. So scientists looked at the data from Galileo as if it were coming from another planet to see what patterns in the data might indicate life.

I’ve started using “looking for life on Earth” as a metaphor. I’m working on a project right now where I’m looking for a needle in a haystack, or rather another needle in a haystack: I knew that one needle existed before I got started. So I want to make sure that my search procedure at least finds the one positive result I already know exists. I explained to a colleague that we need to make sure we can at least find life on Earth.

This reminds me of simulation work. You make up a scenario and treat it as the state of nature. Then pretend you don’t know that state, and see how successful your method is at discovering that state. It’s sort of a schizophrenic way of thinking, pretending that half of your brain doesn’t know what the other half is doing.

It also reminds me of software testing. The most trivial tests can be surprisingly effective at finding bugs. So you write a few tests to confirm that there’s life on Earth.


[1] I found out about Galileo’s Earth reconnaissance listening to the latest episode of the .NET Rocks! podcast.

Ellipsoid geometry and Haumea

To a first approximation, Earth is a sphere. A more accurate description is that Earth is an oblate spheroid, the polar diameter being a little shorter than the equatorial diameter. See details here. Other planets are oblate spheroids as well. Jupiter is more oblate, i.e. further from spherical, than Earth.

The general equation for the surface of an ellipsoid is

\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1

An ellipsoid has three semi-axes: a, b, and c. For a sphere, all three are equal to the radius. For a spheroid, two are equal. In an oblate spheroid, like Earth and Jupiter, a = b > c. For a prolate spheroid, like Saturn’s moon Enceladus, a = b < c.

Artists conception of Haumea

The dwarf planet Haumea is far from spherical. It’s not even a spheroid. It’s a general (tri-axial) ellipsoid, with a, b, and c being quite distinct. Its three semi-axes are approximately 1000, 750, and 500 kilometers. The dimensions are not known very accurately. The image above is an artist’s conception of Haumea and its ring, via Astronomy Picture of the Day.

This post explains how to compute the volume and surface area of an ellipsoid. See that post for details. Here we’ll just copy the Python code from that post.

    from math import sin, cos, acos, sqrt, pi
    from scipy.special import ellipkinc, ellipeinc

    def volume(a, b, c):
        # Volume of an ellipsoid with semi-axes a, b, c
        return 4*pi*a*b*c/3

    def area(a, b, c):
        # Surface area of an ellipsoid with semi-axes a >= b >= c > 0
        phi = acos(c/a)
        # modulus k; scipy's incomplete elliptic integrals take the parameter m = k**2
        k = a*sqrt(b**2 - c**2)/(b*sqrt(a**2 - c**2))
        E = ellipeinc(phi, k**2)
        F = ellipkinc(phi, k**2)
        elliptic = E*sin(phi)**2 + F*cos(phi)**2
        return 2.0*pi*c**2 + 2*pi*a*b*elliptic/sin(phi)

    # Haumea's approximate semi-axes in kilometers
    a, b, c = 1000, 750, 500

    print(volume(a, b, c))
    print(area(a, b, c))

Related post: Planets evenly spaced on a log scale

Asymmetric surprise

Motivating example: planet spacing

My previous post showed that planets are roughly evenly distributed on a log scale, not just in our solar system but also in extrasolar planetary systems. I hadn’t seen this before I stumbled on it by making some plots.

I didn’t think it was an original discovery—I assume someone did this exercise immediately when systems with several planets were discovered—but I didn’t know what this observation was called. I now know it’s known as the Titius-Bode law, a generalization of an observation about our solar system by Messrs. Titius and Bode a couple centuries ago. See, for example, [1].

Several people were skeptical of the claim that planets are distributed according to a power law and pointed out that uniformly distributed points can look fairly evenly distributed on a logarithmic scale. Which is true, and gets to the topic I want to discuss in this post. Planets are not spaced like uniform random samples (see [1]), and yet it is reasonable, at first glance, to ask whether they are.

Asymmetric surprise

If you’re expecting a power law, and you’re given uniformly distributed data, it doesn’t look too surprising. On the other hand, if you’re expecting uniformly distributed data and you see data distributed according to a power law, you are surprised. I’ll formalize this below.

If you’ve ever tried to make a scaled model of our solar system, you were probably surprised that the planets are far from uniformly spaced. A scaled model of our solar system, say at a museum, is likely to position a few of the inner planets to scale, and then use text to explain where the outer planets should be. For example, there may be a footnote saying “And if everything were to scale, Pluto would be behind the Exxon station at the end of the street.” This is an example of implicitly expecting a uniform distribution and receiving data distributed according to a power law.

Some people suspected that I was doing the opposite. By plotting distances on a log scale, I was implicitly expecting a power law distribution. Maybe the data were roughly uniform, but I fooled myself into seeing a power law.

Quantifying surprise

The Kullback-Leibler divergence from Y to X, written KL(X || Y), is the average surprise of seeing Y when you expected X. That’s one of the interpretations. See this post for more interpretations.

In general, Kullback-Leibler divergence is not symmetric. The divergence from X to Y typically does not equal the divergence from Y to X. The discussion above claims that the surprise from seeing power law data when expecting a uniform distribution is greater than the surprise from seeing uniform data when expecting a power law distribution. We show below that this is true.

Let X be a random variable uniformly distributed on [0, 1] and let Y be a random variable with density proportional to x^α on the same interval. (The proportionality constant necessary to make the density integrate to 1 is α + 1.) We will show that KL(X || Y) is greater than KL(Y || X).

First we calculate the two divergences.

\begin{eqnarray*}
\mathrm{KL}(X || Y) &=& - \int_0^1 f_X(x) \, \log\left(\frac{f_Y(x)}{f_X(x)} \right) \, dx \\
&=& -\int_0^1 1 \cdot \left( \log(\alpha+1) + \alpha \log x - \log 1 \right) \, dx \\
&=& \alpha - \log(\alpha+1)
\end{eqnarray*}

and

\begin{eqnarray*}
\mathrm{KL}(Y || X) &=& - \int_0^1 f_Y(x) \, \log\left(\frac{f_X(x)}{f_Y(x)} \right) \, dx \\
&=& -\int_0^1 (\alpha + 1)x^\alpha \left(\log 1 -\log(\alpha+1) - \alpha \log x \right) \, dx \\
&=& \log(\alpha+1) - \frac{\alpha}{1 + \alpha}
\end{eqnarray*}

The first expression is larger: the difference KL(X || Y) − KL(Y || X) = α + α/(1 + α) − 2 log(α + 1) is zero at α = 0 and its derivative with respect to α is (1 − 1/(1 + α))² ≥ 0, so the difference is positive for every α > 0. And here is a plot comparing the two results as a function of the exponent α.
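
Here is a short sketch of how such a plot can be generated from the two closed-form expressions above.

    import numpy as np
    import matplotlib.pyplot as plt

    alpha = np.linspace(0.01, 4, 400)
    kl_xy = alpha - np.log(alpha + 1)                  # KL(X || Y) from above
    kl_yx = np.log(alpha + 1) - alpha / (alpha + 1)    # KL(Y || X) from above

    plt.plot(alpha, kl_xy, label="KL(X || Y)")
    plt.plot(alpha, kl_yx, label="KL(Y || X)")
    plt.xlabel(r"$\alpha$")
    plt.legend()
    plt.show()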


***

[1] Timothy Bovaird, Charles H. Lineweaver; Exoplanet predictions based on the generalized Titius–Bode relation, Monthly Notices of the Royal Astronomical Society, Volume 435, Issue 2, 21 October 2013, Pages 1126–1138, https://doi.org/10.1093/mnras/stt1357

Planets evenly spaced on log scale

The previous post was about Kepler’s observation that the planets were spaced out around the sun the same way that nested regular solids would be. Kepler only knew of six planets, which was very convenient because there are only five regular solids. In fact, Kepler thought there could only be six planets because there are only five regular solids.

The distances to the planets roughly form a geometric series. Ratios of consecutive distances are between 1.3 and 3.4. That means the distances should be fairly evenly spaced on a logarithmic scale. The plot below shows that this is the case.

Distance to planets in our solar system on log scale

The plot goes beyond the six planets known in Kepler’s day and adds four more: Uranus, Neptune, Pluto, and Eris. Here I’m counting the two largest Kuiper belt objects as planets. Distances are measured in astronomical units, i.e. Earth = 1.

Update: The fit is even better if we include Ceres, the largest object in the asteroid belt. It’s now called a dwarf planet, but it was considered a planet for the first fifty years after it was discovered.

Distances in our solar system including planets and dwarf planets
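
Here is a minimal sketch of how a plot like this could be made with matplotlib. The distances are rounded values in AU that I’m supplying for illustration; the plots above may have been made from slightly different data.

    import matplotlib.pyplot as plt

    # Approximate distances in AU, including Ceres, Pluto, and Eris
    bodies = ["Mercury", "Venus", "Earth", "Mars", "Ceres", "Jupiter",
              "Saturn", "Uranus", "Neptune", "Pluto", "Eris"]
    au = [0.39, 0.72, 1.00, 1.52, 2.77, 5.20, 9.55, 19.2, 30.1, 39.5, 67.7]

    plt.scatter(au, range(len(au)))
    plt.xscale("log")                       # even spacing shows up on a log axis
    plt.yticks(range(len(au)), bodies)
    plt.xlabel("Distance (AU), log scale")
    plt.show()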

Extrasolar planets

Is this just true of our solar system, or is it true of other planetary systems as well? At the time of writing, we know of one planetary system outside our own that has 8 planets. Three other systems have around 7 planets; I say “around” because it depends on whether you include unconfirmed planets. The spacing between planets in these other systems is also fairly even on a logarithmic scale. Data on planet distances is taken from each system’s Wikipedia page. Distances are semimajor axes, not average distances.

Kepler-90

Kepler-90 is the only planetary system outside our own with eight confirmed planets that we know of.

Distances to planets in Kepler-90 system

HD 10180

HD 10180 has seven confirmed planets and two unconfirmed planets. The unconfirmed planets are included below, the 3rd and 6th objects.

Distances to planets in HD 10180 system

HR 8832

The next largest system is HR 8832 with five confirmed planets and two unconfirmed, numbers 5 and 6 below. It would work out well for this post if the 6th object were found to be a little closer to its star.

Distances to planets in HR 8832 system

TRAPPIST-1

TRAPPIST-1 is interesting because the planets are very close to their star, ranging from 0.01 to 0.06 AU. Also, this system follows an even logarithmic spacing more closely than the others.

Distances to planets in TRAPPIST-1 system

Systems with six planets

There are currently four known systems with six planets: Kepler-11, Kepler-20, HD 40307, and HD 34445. They also appear to have planets evenly spaced on a log scale.

Distances between planets in known systems with six planets

Like TRAPPIST-1, Kepler-20 has planets closer in and more evenly spaced (on a log scale).


Planets and Platonic solids

Johannes Kepler discovered in 1596 that the ratios of the orbits of the six planets known in his day were the same as the ratios between nested Platonic solids. Kepler was understandably quite impressed with this discovery and called it the Mysterium Cosmographicum.

Kepler's Mysterium Cosmographicum

I heard of this in a course in the history of astronomy long ago, and have had in the back of my mind that one day I’d look into this in detail. How exactly do you fit these regular solids together? How well do the planetary ratios match the regular solid ratios?

Imagine the orbit of each planet being the equator of a spherical shell centered at the sun. The five regular solids fit snugly into the spaces between the shells. Between Mercury and Venus you can insert an octahedron. Its inradius is the distance of Mercury to the sun, and its circumradius is the distance of Venus to the sun. You can fit the other regular solids in similarly, the icosahedron between Venus and Earth, the dodecahedron between Earth and Mars, the tetrahedron between Mars and Jupiter, and the hexahedron (cube) between Jupiter and Saturn.

Here’s the data on the inradius and circumradius of each regular solid, taken from MathWorld.

|--------------+--------------+----------|
| solid        | circumradius | inradius |
|--------------+--------------+----------|
| octahedron   |      0.70722 |  0.40825 |
| icosahedron  |      0.95106 |  0.75576 |
| dodecahedron |      1.40126 |  1.11352 |
| tetrahedron  |      0.61237 |  0.20412 |
| hexahedron   |      0.86603 |  0.50000 |
|--------------+--------------+----------|

Here’s the data on average orbit radii measured in astronomical units taken from WolframAlpha.

|---------+----------|
| Planet  | Distance |
|---------+----------|
| Mercury |  0.39528 |
| Venus   |  0.72335 |
| Earth   |  1.00000 |
| Mars    |  1.53031 |
| Jupiter |  5.20946 |
| Saturn  |  9.55105 |
|---------+----------|

So how well does Kepler’s pattern hold? In the table below, “Planet ratio” is the radius ratios of Venus to Mercury, Earth to Venus, etc. “Solid ratio” is the circumradius to inradius ratio of the regular solids in the order given above.

|--------------+-------------|
| Planet ratio | Solid ratio |
|--------------+-------------|
|      1.82997 |     1.73205 |
|      1.38246 |     1.25841 |
|      1.53031 |     1.25841 |
|      3.40419 |     3.00000 |
|      1.83340 |     1.73205 |
|--------------+-------------|
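
The two columns can be reproduced with a few lines of Python from the values in the tables above.

    # Circumradius and inradius of each solid, in the order they are nested
    solids = {"octahedron":   (0.70722, 0.40825),
              "icosahedron":  (0.95106, 0.75576),
              "dodecahedron": (1.40126, 1.11352),
              "tetrahedron":  (0.61237, 0.20412),
              "hexahedron":   (0.86603, 0.50000)}

    au = [0.39528, 0.72335, 1.00000, 1.53031, 5.20946, 9.55105]  # Mercury .. Saturn

    for (name, (R, r)), inner, outer in zip(solids.items(), au, au[1:]):
        print(f"{name:12s}  planet ratio {outer/inner:.5f}  solid ratio {R/r:.5f}")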

Not a bad fit, but not great either. I’ve heard that the fit was better given the data available to Kepler at the time; if Kepler had had more accurate data, he might not have come up with his Mysterium Cosmographicum.

By the way, notice some repetition in the solid ratios. The implied equalities are exact. The icosahedron and dodecahedron have the same ratio of circumradius to inradius because they are dual polyhedra. The same is true for the cube and the octahedron. Also, the ratio of 3 for the tetrahedron is exact.

Update: What if Kepler had known about more planets? The next planet ratios in our table above would be 2.01, 1.57, and 1.35. None of these is close to any of the solid ratios.


Gravitational attraction of stars and cows

cow

One attempt at rationalizing astrology is to say that the gravitational effects of celestial bodies impact our bodies. To get an idea how hard the stars and planets pull on us, let’s compare their gravitational attraction to that of cows at various distances.

Newton’s law of gravity says that the gravitational attraction between two objects is proportional to the product of their masses and inversely proportional to the square of the distance between them. From this we can solve for how far away a cow would have to be to have the same gravitational attraction.

d_{\text{cow}} = d_{\text{star}} \sqrt{ \frac{m_{\text{cow}}}{m_\text{star}} }

For round numbers, let’s say a cow weighs 1000 kg. (Strictly speaking, that’s a more typical weight for a bull than a cow.) Then Jupiter has as much gravitational tug on a person as does a cow about two feet away.

Regulus, the brightest star in Leo, has the same pull as a cow about 5 miles away. Alpha Centauri, the closest star to Earth (other than the sun, of course), has about the same pull as a cow 18 miles away.

Calculations were based on average distance. The distance to planets changes over time, but so does the distance to a given cow.

|----------------+--------------|
| Body           | Cow distance |
|----------------+--------------|
| Jupiter        |     0.57 m   |
| Venus          |     2.45 m   |
| Mars           |    10.06 m   |
| Regulus        |     7.81 km  |
| Alpha Centauri |    28.70 km  |
|----------------+--------------|
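
As a sanity check, here is the Jupiter row as a short Python sketch. The mass and average distance are rounded values I’m supplying, not necessarily the ones used for the table.

    from math import sqrt

    AU = 1.496e11                  # meters per astronomical unit

    m_cow = 1000                   # kg, the round number used above
    m_jupiter = 1.9e27             # kg, approximate
    d_jupiter = 5.2 * AU           # m, approximate average distance from Earth

    d_cow = d_jupiter * sqrt(m_cow / m_jupiter)
    print(d_cow)                   # roughly 0.57 m, i.e. about two feet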

Could you read on Pluto?

Reading at sunset

I heard somewhere that Pluto receives more sunlight than you might think, enough to read by, and that sunlight on Pluto is much brighter than moonlight on Earth. I forget where I heard that, but I’ve done a back-of-the-envelope calculation to confirm that it’s true.

Pluto is about 40 AU from the sun, i.e. forty times as far from the sun as we are. The inverse square law implies Pluto gets 1/1600 as much light from the sun as Earth does.

Direct sun is between 30,000 and 100,000 lux (lumens per square meter). We’ll go with the high end because that’s an underestimate of how bright the sun would be if we didn’t have an atmosphere between us and the sun. So at high noon on Pluto you’d get at least 60 lux of sunlight.

Civil twilight is roughly enough light to read by, and that’s 3.4 lux. Moonlight is less than 0.3 lux.

60 lux would be comparable to indoor lighting in a hall or stairway.
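
The whole back-of-the-envelope calculation fits in a few lines of Python, using the round numbers above.

    earth_noon_lux = 100_000        # high end of direct sunlight on Earth
    pluto_distance_au = 40          # Pluto's approximate distance from the sun

    pluto_lux = earth_noon_lux / pluto_distance_au**2
    print(pluto_lux)                # about 62 lux, versus 3.4 lux for civil twilight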

Are coffee and wine good for you or bad for you?

wine coffee

One study will say that coffee is good for you and then another will say it’s bad for you. Ditto with wine and many other things. So which is it: are these things good for you or bad for you?

Probably neither. That is, these things that are endlessly studied with contradictory conclusions must not have much of an effect, positive or negative, or else studies would be more definitive.

John Ioannidis puts this well in his recent interview on EconTalk:

John Ioannidis: … We have performed hundreds of thousands of studies trying to look whether single nutrients are associated with specific types of disease outcomes. And, you know, you see all these thousands of studies about coffee, and tea, and all kind of —

Russ Roberts: broccoli, red meat, wine, …

John Ioannidis: — things that you eat. And they are all over the place. And they are all over the place, and they are always in the news. And I think it is a complete waste. We should just decide that we are talking about very small effects. The noise is many orders of magnitude more than the signal. If there is a signal. Maybe there is no signal at all. So, why are we keep doing this? We should just pause, and abandon this type of design for this type of question.

Misplacing a continent

There are many conventions for describing points on a sphere. For example, does latitude zero refer to the North Pole or the equator? Mathematicians tend to prefer the former and geoscientists the latter. There are also varying conventions for longitude.

Volker Michel describes this clash of conventions colorfully in his book on constructive approximation.

Many mathematicians have faced weird jigsaw puzzles with misplaced continents after using a data set from a geoscientist. If you ever get such figures, too, or if you are, for example, desperately searching South America in a data set but cannot find it, remember the remark you have just read to solve your problem.


Mercury and the bandwagon effect

Mercury

The study of the planet Mercury provides two examples of the bandwagon effect. In her new book Worlds Fantastic, Worlds Familiar, planetary astronomer Bonnie Buratti writes

The study of Mercury … illustrates one of the most confounding bugaboos of the scientific method: the bandwagon effect. Scientists are only human, and they impose their own prejudices and foregone conclusions on their experiments.

Around 1800, Johann Schroeter determined that Mercury had a rotational period of 24 hours. This view held for eight decades.

In the 1880’s, Giovanni Schiaparelli determined that Mercury was tidally locked, making one rotation on its axis for every orbit around the sun. This view also held for eight decades.

In 1965, radar measurements of Mercury showed that Mercury completes 3 rotations in every 2 orbits around the sun.

Studying Mercury is difficult since it is only visible near the horizon and around sunrise and sunset, i.e. when the sun’s light interferes. And it is understandable that someone would confuse a 3:2 resonance with tidal locking. Still, for two periods of eight decades each, astronomers looked at Mercury and concluded what they expected.

The difficulty of seeing Mercury objectively was compounded by two incorrect but satisfying metaphors. First that Mercury was like Earth, rotating every 24 hours, then that Mercury was like the moon, orbiting the sun the same way the moon orbits Earth.

Buratti mentions the famous Millikan oil drop experiment as another example of the bandwagon effect.

… Millikan’s value for the electron’s charge was slightly in error—he had used a wrong value for the viscosity of air. But future experimenters all seemed to get Millikan’s number. Having done the experiment myself I can see that they just picked those values that agreed with previous results.

Buratti explains that Millikan’s experiment is hard to do and “it is impossible to successfully do it without abandoning most data.” This is what I like to call acceptance-rejection modeling.

The name comes from the acceptance-rejection method of random number generation. For example, the obvious way to generate truncated normal random values is to generate (unrestricted) normal random values and simply throw out the ones that lie outside the interval we’d like to truncate to. This is inefficient if we’re truncating to a small interval, but it always works. We’re conforming our samples to a pre-determined distribution, which is OK when we do it intentionally. The problem comes when we do it unintentionally.
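
Here is a minimal sketch of that truncated normal example, with the interval and sample size chosen arbitrarily for illustration.

    import numpy as np

    rng = np.random.default_rng()

    def truncated_normal(n, a, b):
        """Sample n standard normal values restricted to [a, b] by
        acceptance-rejection: draw batches and keep only values inside."""
        samples = np.empty(0)
        while len(samples) < n:
            batch = rng.standard_normal(n)
            samples = np.concatenate([samples, batch[(batch >= a) & (batch <= b)]])
        return samples[:n]

    x = truncated_normal(10_000, -0.5, 0.5)
    print(x.min(), x.max())   # every accepted sample lies inside the interval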

Photo of Mercury above via NASA

Freudian hypothesis testing

Sigmund Freud

In his paper Mindless statistics, Gerd Gigerenzer uses a Freudian analogy to describe the mental conflict researchers experience over statistical hypothesis testing. He says that the “statistical ritual” of NHST (null hypothesis significance testing) “is a form of conflict resolution, like compulsive hand washing.”

In Gigerenzer’s analogy, the id represents Bayesian analysis. Deep down, a researcher wants to know the probabilities of hypotheses being true. This is something that Bayesian statistics makes possible, but more conventional frequentist statistics does not.

The ego represents R. A. Fisher’s significance testing: specify a null hypothesis only, not an alternative, and report a p-value. Significance is calculated after collecting the data. This makes it easy to publish papers. The researcher never clearly states his hypothesis, and yet takes credit for having established it after rejecting the null. This leads to feelings of guilt and shame.

The superego represents the Neyman-Pearson version of hypothesis testing: pre-specified alternative hypotheses, power and sample size calculations, etc. Neyman and Pearson insist that hypothesis testing is about what to do, not what to believe. [1]

 


 

I assume Gigerenzer doesn’t take this analogy too seriously. In context, it’s a humorous interlude in his polemic against rote statistical ritual.

But there really is a conflict in hypothesis testing. Researchers naturally think in Bayesian terms, and interpret frequentist results as if they were Bayesian. They really do want probabilities associated with hypotheses, and will imagine they have them even though frequentist theory explicitly forbids this. The rest of the analogy, comparing the ego and superego to Fisher and Neyman-Pearson respectively, seems weaker to me. But I suppose you could imagine Neyman and Pearson playing the role of your conscience, making you feel guilty about the pragmatic but unprincipled use of p-values.

* * *

[1] “No test based upon a theory of probability can by itself provide any valuable evidence of the truth or falsehood of a hypothesis. But we may look at the purpose of tests from another viewpoint. Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern behaviour in regard to them, in following which we insure that, in the long run of experience, we shall not often be wrong.”

Neyman J, Pearson E. On the problem of the most efficient tests of statistical hypotheses. Philos Trans Roy Soc A. 1933;231:289–337.

Simulating seashells

In 1838, Rev. Henry Moseley discovered that a large number of mollusk shells and other shells can be described using three parameters: k, T, and D.

simulated seashell

First imagine a thin wire running through the coil of the shell. In cylindrical coordinates, this wire follows the parameterization

r = e^{kt}
z = Tt

If T = 0 this is a logarithmic spiral in the (r, θ) plane. For positive T, the spiral is stretched so that its vertical position is proportional to its radius.

Next we build a shell by putting a tube around this imaginary wire. The radius R of the tube at each point is proportional to the r coordinate: R = Dr.

The image above was created with k = 0.1, T = 2.791, and D = 0.8845 using Øyvind Hammer’s seashell generating software. You can download Hammer’s software for Windows and experiment with your own shell simulations by adjusting the parameters.
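
If you’d rather experiment in Python, here is a rough sketch of the construction described above using NumPy and Matplotlib. This is not Hammer’s program, and for simplicity the tube’s circular cross-section is drawn in the vertical plane through the axis rather than perpendicular to the wire, a common shortcut for seashell surfaces.

    import numpy as np
    import matplotlib.pyplot as plt

    k, T, D = 0.1, 2.791, 0.8845             # parameters from the text

    t = np.linspace(0, 6 * np.pi, 400)       # position along the coiling wire
    s = np.linspace(0, 2 * np.pi, 60)        # angle around the tube
    t, s = np.meshgrid(t, s)

    r = np.exp(k * t)                        # radius of the wire from the axis
    x = r * (1 + D * np.cos(s)) * np.cos(t)  # tube of radius D*r around the wire
    y = r * (1 + D * np.cos(s)) * np.sin(t)
    z = T * t + D * r * np.sin(s)

    ax = plt.figure().add_subplot(projection="3d")
    ax.plot_surface(x, y, z, linewidth=0)
    plt.show()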

See also Hammer’s book and YouTube video: