Changing your mind

From Dorothy Sayers’ essay Why Work?

It is always strange and painful to have to change a habit of mind; though, when we have made the effort, we may find a great relief, even a sense of adventure and delight, in getting rid of the false and returning to the true.

Volume of a rose-shaped torus

Start with a rose, as described in the previous post:

Now spin that rose around a vertical line a distance R from the center of the rose. This makes a torus (doughnut) shape whose cross sections look like the rose above. You could think of having a cutout shaped like the rose above and extruding Play-Doh through it, then joining the ends in a loop.

Volume of rotation from rose

In case you’re curious, the image above was created with the following Mathematica command:

      RevolutionPlot3D[{4 + Cos[5 t] Cos[t], Cos[5 t] Sin[t]}, {t, 0, 2 Pi}]

What would the volume of the resulting solid be?

This sounds like a horrendous calculus homework problem, but it’s actually quite easy. A theorem by Pappus of Alexandria (c. 300 AD) says that the volume is equal to the area times the circumference of the circle traversed by the centroid.

The area of a rose of the form r = cos(kθ) is simply π/2 if k is even and π/4 if k is odd. (Recall from the previous post that the number of petals is 2k if k is even and k if k is odd.)

The location of the centroid is easy: it’s at the origin by symmetry. If we rotate the rose around a vertical line x = R then the centroid travels a distance 2πR.

So putting all the pieces together, the volume is π²R if k is even and half that if k is odd. (We assume R > 1 so that the figure doesn’t intersect itself, which would cause some volume to be counted twice.)

We can also find the surface area easily by another theorem of Pappus. The surface area is just the arc length of the rose times the circumference of the circle traced out by the centroid. The previous post showed how to find the arc length, so just multiply that by 2πR.
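Putting the pieces together in a few lines of Python (a sketch; the arc length 4E(−k² + 1) is derived in the next post, where scipy.special.ellipe is introduced):

      from math import pi
      from scipy.special import ellipe

      k, R = 5, 4   # rose r = cos(5θ) revolved around the line x = R

      area = pi/4 if k % 2 else pi/2    # area of the rose
      circumference = 2*pi*R            # distance the centroid travels
      volume = area * circumference     # Pappus: area × distance traveled

      length = 4*ellipe(1 - k**2)       # arc length of the rose
      surface = length * circumference  # Pappus: arc length × distance traveled

      print(volume, surface)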

Length of a rose

The polar graph of r = cos(kθ) is called a rose. If k is even, the curve will trace out 2k petals as θ runs between 0 and 2π. If k is odd, it will trace out k petals, tracing each one twice. For example, here’s a rose with k = 5.

(I rotated the graph 36° so it would be symmetric about the vertical axis rather than the horizontal axis.)

The arc length of a curve in polar coordinates is given by

\int_a^b \sqrt{r^2 + \left(\frac{dr}{d\theta}\right)^2}\, d\theta

and so we can use this to find the length. The integral doesn’t have a closed form in terms of elementary functions. Instead, the result turns out to use a special function E(m), the “complete elliptic integral of the second kind,” defined by

E(m) = \int_0^{\pi/2} \sqrt{1 - m \sin^2 x} \, dx

Here’s the calculation for the length of a rose:

\begin{eqnarray*}
\int_0^{2\pi} \sqrt{r^2 + (r')^2}\, d\theta &=& \int_0^{2\pi} \sqrt{\cos^2 k\theta + k^2 \sin^2 k\theta} \, d\theta \\
&=& \int_0^{2\pi} \sqrt{\cos^2 u + k^2 \sin^2 u} \, du \\
&=& 4 \int_0^{\pi/2} \sqrt{\cos^2 u + k^2 \sin^2 u} \, du \\
&=& 4 \int_0^{\pi/2} \sqrt{1 + (k^2-1) \sin^2 u} \, du \\
&=& 4 E(-k^2 + 1)
\end{eqnarray*}

So the arc length of the rose r = cos(kθ) with θ running from 0 to 2π is 4 E(-k² + 1). You can calculate E in SciPy with scipy.special.ellipe.
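Here’s what that looks like as a function (a sketch):

      from scipy.special import ellipe

      def rose_length(k):
          # arc length of r = cos(kθ) as θ runs from 0 to 2π
          return 4*ellipe(1 - k**2)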

If we compute the length of the rose at the top of the post, we get 4 E(-24) = 21.01. Does that pass the sniff test? Each petal goes from r = 0 out to r = 1 and back. If the petal were a straight line, this would have length 2. Since the petals are curved, the length of each is a little more than 2. There are five petals, so the result should be a little more than 10. But we got a little more than 20. How can that be? Since 5 is odd, the rose with k = 5 traces each petal twice, so we should expect a value of a little more than 20, which is what we got.

As k gets larger, the petals come closer to being straight lines. So we should expect that 4E(-k² + 1) approaches 4k as k gets large. The following plot of E(-k² + 1) – k provides empirical support for this conjecture by showing that the difference approaches 0, and gives an idea of the rate of convergence. It should be possible to prove, for example, that E(-k² + 1) asymptotically approaches k, but I haven’t done this.
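If you’d like to reproduce the plot, here’s a sketch using Matplotlib:

      import numpy as np
      import matplotlib.pyplot as plt
      from scipy.special import ellipe

      k = np.arange(2, 101)
      plt.plot(k, ellipe(1 - k**2) - k)
      plt.xlabel("k")
      plt.ylabel("E(-k² + 1) - k")
      plt.show()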

When length equals area

The graph of hyperbolic cosine is called a catenary. A catenary has the following curious property: the length of a catenary between two points equals the area under the catenary between those two points.

The proof is surprisingly simple. Start with the following:

\sqrt{1 +\left(\frac{d}{dx} \cosh(x) \right)^2} = \sqrt{1 + \sinh^2(x)} = \cosh(x)

Now integrate the first and last expressions between two points a and b. Note that the former integral gives the arc length of cosh between a and b, and the latter integral gives the area under the graph of cosh between a and b.
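If you’d like a numerical sanity check, here’s a sketch using SciPy’s quadrature:

      from math import cosh, sinh, sqrt
      from scipy.integrate import quad

      a, b = -1, 2
      length, _ = quad(lambda x: sqrt(1 + sinh(x)**2), a, b)
      area, _ = quad(cosh, a, b)
      print(length, area, sinh(b) - sinh(a))   # all three agree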

By the way, the most famous catenary may be the Gateway Arch in St. Louis, Missouri.

Gateway Arch at night

Solving systems of polynomial equations

In a high school algebra class, you learn how to solve polynomial equations in one variable, and systems of linear equations. You might reasonably ask “So when do we combine these and learn to solve systems of polynomial equations?” The answer would be “Maybe years from now, but most likely never.” There are systematic ways to solve systems of polynomial equations, but you’re unlikely to ever see them unless you study algebraic geometry.

Here’s an example from [1]. Suppose you want to find the extreme values of x³ + 2xyz − z² on the unit sphere using Lagrange multipliers. This leads to the following system of polynomial equations, where λ is the Lagrange multiplier.

\begin{eqnarray*}
3x^2 + 2yz - 2x\lambda &=& 0 \\
2xz - 2y\lambda &=& 0 \\
2xy - 2z - 2z\lambda &=& 0 \\
x^2 + y^2 + z^2 - 1 &=& 0
\end{eqnarray*}

There’s no obvious way to go about solving this system of equations. However, there is a systematic way to approach this problem using a “lexicographic Gröbner basis.” This transforms the problem into something that looks much worse but that is actually easier to work with. And most importantly, the transformation is algorithmic. It requires some computation—there are numerous software packages for doing this—but doesn’t require a flash of insight.
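SymPy is one such package. Here’s a sketch of how the computation could be set up; I haven’t verified that its output matches the basis below exactly, and lam stands in for λ:

      from sympy import symbols, groebner

      x, y, z, lam = symbols('x y z lambda')
      F = [3*x**2 + 2*y*z - 2*x*lam,
           2*x*z - 2*y*lam,
           2*x*y - 2*z - 2*z*lam,
           x**2 + y**2 + z**2 - 1]
      G = groebner(F, lam, x, y, z, order='lex')
      for g in G.exprs:
          print(g)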

The transformed system looks intimidating compared to the original:

\begin{eqnarray*}
\lambda - \frac{3}{2}x - \frac{3}{2}yz - \frac{167616}{3835}z^6 - \frac{36717}{590}z^4 - \frac{134419}{7670}z^2 &=& 0 \\
x^2 + y^2 + z^2 - 1 &=& 0 \\
xy - \frac{19584}{3835}z^5 + \frac{1999}{295}z^3 - \frac{6403}{3835}z &=& 0 \\
xz + yz^2 - \frac{1152}{3835}z^5 - \frac{108}{295}z^3 + \frac{2556}{3835}z &=& 0 \\
y^3 + yz^2 - y -\frac{9216}{3835}z^5 + \frac{906}{295}z^3 - \frac{2562}{3835}z &=& 0 \\
y^2z - \frac{6912}{3835}z^5 -\frac{827}{295}z^3 -\frac{3839}{3835}z &=& 0 \\
yz^3 - yz - \frac{576}{59}z^6 + \frac{1605}{118}z^4 - \frac{453}{118}z^2 &=& 0 \\
z^7 - \frac{1763}{1152}z^5 + \frac{655}{1152}z^3 - \frac{11}{288}z &=& 0
\end{eqnarray*}

We’ve gone from four equations to eight, from small integer coefficients to large fraction coefficients, from squares to seventh powers. And yet we’ve made progress because the four variables are less entangled in the new system.

The last equation involves only z and factors nicely:

z\left(z^2 - 1\right)\left(z^2 - \frac{4}{9}\right) \left(z^2 - \frac{11}{128}\right) = 0

This cracks the problem wide open. We can easily find all the possible values of z, and once we substitute values for z, the rest of the equations simplify greatly and can be solved easily.

The key is that Gröbner bases transform our problem into a form that, although it appears more difficult, is easier to work with because the variables are somewhat separated. Solving one variable, z, is like pulling out a thread that then makes the rest of the threads easier to separate.

Related: The great reformulation of algebraic geometry

* * *

[1] David Cox et al. Applications of Computational Algebraic Geometry. American Mathematical Society Short Course, January 6–7, 1997, San Diego, California. Proceedings of Symposia in Applied Mathematics.

Generating pink noise

Different colors of noise are named by analogy with colors of light. Pink noise is between white noise and red noise.

White noise has equal power at all frequencies, just as white light is a combination of all the frequencies of the visible spectrum. The components of red noise are weighted toward low frequencies, just as red light is at the low end of the visible spectrum. Pink noise is weighted toward low frequencies too, but not as strongly as red. Specifically, the power in red noise drops off like 1/f² where f is frequency. The power in pink noise drops off like 1/f.

Generating pink noise is more complicated than you might think. The book Creating Noise, by Stefan Hollos and J. Richard Hollos, has a good explanation and C source code for generating pink noise and variations such as 1/f^α noise for 0 < α < 1. If you want even more background, check out Recursive Digital Filters by the same authors.
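The book uses recursive digital filters. Just to illustrate the 1/f idea, not the book’s method, here’s a sketch that shapes white noise in the frequency domain:

      import numpy as np

      def pink_noise(n, rng=None):
          if rng is None:
              rng = np.random.default_rng()
          white = rng.standard_normal(n)
          spectrum = np.fft.rfft(white)
          f = np.fft.rfftfreq(n)
          f[0] = f[1]              # avoid dividing by zero at DC
          spectrum /= np.sqrt(f)   # power ~ 1/f means amplitude ~ 1/sqrt(f)
          return np.fft.irfft(spectrum, n)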

If you’d like to hear what pink noise sounds like, here’s a sample that was created using the software in the book with a 6th order filter.

Building software the right way

Yesterday a friend told me about a software project whose owners said “We’re going to do this the right way.” I told him I have two opposite reactions when I hear that:

  • Ooh, that sounds like fun!
  • Run away!

I’ve been on several projects where the sponsors have identified some aspect of the status quo that clearly needs improving. Working on a project that recognizes these problems and is willing to address them sounds like fun. But then the project runs to the opposite extreme and creates something worse.

For example, most software development is sloppy. So a project reacts against this and becomes so formal that nobody can get any work done. In those settings I like to say “Hold off on buying a tux. Just start by tucking your shirt tail in. Maybe that’s as formal as you need to be.”

I’d be more optimistic about a project with more modest goals, say one that wants to move 50% of the way toward an ideal, rather than wanting to jump from one end of a spectrum to the other. Or even better, a project that has identified a direction they want to move in, and thinks in terms of experimenting to find their optimal position along that direction.

The 3n+1 problem and Benford’s law

This is the third, and last, of a series of posts on Benford’s law, this time looking at a famous open problem in computer science, the 3n + 1 problem, also known as the Collatz conjecture.

Start with a positive integer n. Compute 3n + 1 and divide by 2 repeatedly until you get an odd number. Then repeat the process. For example, suppose we start with 13. We get 3*13+1 = 40, and 40/8 = 5, so 5 is the next term in the sequence. 5*3 + 1 is 16, which is a power of 2, so we get down to 1.

Does this process always reach 1? So far nobody has found a proof or a counterexample.

If you pick a large starting number n at random, it appears that not only will the sequence terminate, the values produced by the sequence approximately follow Benford’s law (source). If you’re unfamiliar with Benford’s law, please see the first post in this series.

Here’s some Python code to play with this.

from math import log10, floor

def leading_digit(x):
    # the fractional part of log10(x) determines the leading digit
    y = log10(x) % 1
    return int(floor(10**y))

# 3n+1 iteration: return the set of odd numbers the sequence visits
def iterates(seed):
    s = set()
    n = seed
    while n > 1:
        n = 3*n + 1
        while n % 2 == 0:
            n = n // 2  # integer division keeps arithmetic exact for big ints
        s.add(n)
    return s

Let’s save the iterates starting with a large starting value:

it = iterates(378357768968665902923668054558637)
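To compare with Benford’s law we tally the leading digits of the iterates. Here’s a sketch, reusing leading_digit from above and rounding the predictions to whole counts:

from collections import Counter

tally = Counter(leading_digit(n) for n in it)
observed = [tally[d] for d in range(1, 10)]
N = len(it)
predicted = [round(N*log10((d+1)/d)) for d in range(1, 10)]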

Here’s what we get and what we would expect from Benford’s law:

|---------------+----------+-----------|
| Leading digit | Observed | Predicted |
|---------------+----------+-----------|
|             1 |       46 |        53 |
|             2 |       26 |        31 |
|             3 |       29 |        22 |
|             4 |       16 |        17 |
|             5 |       24 |        14 |
|             6 |        8 |        12 |
|             7 |       12 |        10 |
|             8 |        9 |         9 |
|             9 |        7 |         8 |
|---------------+----------+-----------|

We get a chi-square statistic of 12.88 (p = 0.116), so the fit is reasonable.
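Statistics and p-values like these can be computed with scipy.stats.chisquare; a sketch, reusing observed and N from above:

from scipy.stats import chisquare

expected = [N*log10((d+1)/d) for d in range(1, 10)]
stat, p = chisquare(observed, expected)
print(stat, p)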

Here’s another run with a different starting point.

it = iterates(243963882982396137355964322146256)

which produces

|---------------+----------+-----------|
| Leading digit | Observed | Predicted |
|---------------+----------+-----------|
|             1 |       44 |        41 |
|             2 |       22 |        24 |
|             3 |       15 |        17 |
|             4 |       12 |        13 |
|             5 |       11 |        11 |
|             6 |        9 |         9 |
|             7 |       11 |         8 |
|             8 |        6 |         7 |
|             9 |        7 |         6 |
|---------------+----------+-----------|

This has a chi-square value of 2.166 (p = 0.975), which is an even better fit.

Cauchy, Benford, and a problem with NHST

Introduction

Samples from a Cauchy distribution nearly follow Benford’s law. I’ll demonstrate this below. The more data you see, the more confident you should be of this. But with a typical statistical approach, crudely applied NHST (null hypothesis significance testing), the more data you see, the less convinced you are.

This post assumes you’ve read the previous post that explains what Benford’s law is and looks at how well samples from a Weibull distribution follow that law.

This post has two purposes. First, we show that samples from a Cauchy distribution approximately follow Benford’s law. Second, we look at problems with testing goodness of fit with NHST.

Cauchy data

We can reuse the code from the previous post to test Cauchy samples, with one modification. Cauchy samples can be negative, so we have to modify our leading_digit function to take an absolute value.

      def leading_digit(x):
          y = log10(abs(x)) % 1
          return int(floor(10**y))

We’ll also need to import cauchy from scipy.stats and change where we draw samples to use this distribution.

      samples = cauchy.rvs(0, 1, N)

Here’s how a sample of 1000 Cauchy values compared to the prediction of Benford’s law:

|---------------+----------+-----------|
| Leading digit | Observed | Predicted |
|---------------+----------+-----------|
|             1 |      313 |       301 |
|             2 |      163 |       176 |
|             3 |      119 |       125 |
|             4 |       90 |        97 |
|             5 |       69 |        79 |
|             6 |       74 |        67 |
|             7 |       63 |        58 |
|             8 |       52 |        51 |
|             9 |       57 |        46 |
|---------------+----------+-----------|

Here’s a bar graph of the same data.

Bar graph of Cauchy leading digits compared to Benford’s law

Problems with NHST

A common way to measure goodness of fit is to use a chi-square test. The null hypothesis would be that the data follow a Benford distribution. We look at the chi-square statistic for the observed data, comparing it to a chi-square distribution with 8 degrees of freedom (one less than the number of categories, which is 9 because there are nine possible leading digits). We compute the p-value, the probability of seeing a chi-square statistic this large or larger, and reject our null hypothesis if this p-value is too small.

Here’s how our chi-square values and p-values vary with sample size.

|-------------+------------+---------|
| Sample size | chi-square | p-value |
|-------------+------------+---------|
|          64 |     13.542 |  0.0945 |
|         128 |     10.438 |  0.2356 |
|         256 |     13.002 |  0.1118 |
|         512 |      8.213 |  0.4129 |
|        1024 |     10.434 |  0.2358 |
|        2048 |      6.652 |  0.5745 |
|        4096 |     15.966 |  0.0429 |
|        8192 |     20.181 |  0.0097 |
|       16384 |     31.855 | 9.9e-05 |
|       32768 |     45.336 | 3.2e-07 |
|-------------+------------+---------|
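Here’s a sketch of the loop that could generate such a table, reusing leading_digit from above:

      from math import log10
      from scipy.stats import chisquare

      for N in [2**i for i in range(6, 16)]:
          samples = cauchy.rvs(0, 1, size=N)
          counts = [0]*10
          for s in samples:
              counts[leading_digit(s)] += 1
          expected = [N*log10((d+1)/d) for d in range(1, 10)]
          stat, p = chisquare(counts[1:], expected)
          print(N, stat, p)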

The p-values eventually get very small, but they don’t decrease monotonically with sample size. This is to be expected. If the data came from a Benford distribution, i.e. if the null hypothesis were true, we’d expect the p-values to be uniformly distributed, i.e. they’d be equally likely to take on any value between 0 and 1. And not until the two largest samples do we see values that don’t look consistent with uniform samples from [0, 1].

In one sense NHST has done its job. Cauchy samples do not exactly follow Benford’s law, and with enough data we can show this. But we’re rejecting a null hypothesis that isn’t that interesting. We’re showing that the data don’t exactly follow Benford’s law rather than showing that they do approximately follow Benford’s law.

Weibull distribution and Benford’s law

Introduction to Benford’s law

In 1881, Simon Newcomb noticed that the edges of the first pages in a book of logarithms were dirty while the edges of the later pages were clean. From this he concluded that people were far more likely to look up the logarithms of numbers with leading digit 1 than of those with leading digit 9. Frank Benford studied the same phenomenon later, and now it is known as Benford’s law, or sometimes the Newcomb-Benford law.

A data set follows Benford’s law if the proportion of elements with leading digit d is approximately

log10((d + 1)/d).

You could replace “10” with b if you look at the leading digits in base b.

Sets of physical constants often satisfy Benford’s law, as I showed here for the constants defined in SciPy.

Incidentally, factorials satisfy Benford’s law exactly in the limit.

Weibull distributions

The Weibull distribution is a generalization of the exponential distribution. It’s a convenient distribution for survival analysis because it can have decreasing, constant, or increasing hazard, depending on whether the value of a shape parameter γ is less than, equal to, or greater than 1 respectively. The special case of constant hazard, shape γ = 1, corresponds to the exponential distribution.
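Concretely, for a Weibull with scale 1 the hazard function is

h(t) = \gamma t^{\gamma - 1}

which is decreasing when γ < 1, constant when γ = 1, and increasing when γ > 1.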

Weibull and Benford

If the shape parameter of a Weibull distributions is “not too large” then samples from that distribution approximately follow Benford’s law (source). We’ll explore this statement with a little Python code.

SciPy doesn’t contain a Weibull distribution per se, but it does have support for a generalization of the Weibull known as the exponential Weibull. The latter has two shape parameters. We set the first of these to 1 to get the ordinary Weibull distribution.

      from math import log10, floor
      from scipy.stats import exponweib

      def leading_digit(x):
          y = log10(x) % 1
          return int(floor(10**y))

      def weibull_stats(gamma):
          # setting the first shape parameter to 1 gives an ordinary Weibull
          distribution = exponweib(1, gamma)
          N = 10000
          samples = distribution.rvs(N)
          counts = [0]*10
          for s in samples:
              counts[ leading_digit(s) ] += 1
          return counts

Here’s how the leading digit distribution of a simulation of 10,000 samples from an exponential (Weibull with γ = 1) compares to the distribution predicted by Benford’s law.

      |---------------+----------+-----------|
      | Leading digit | Observed | Predicted |
      |---------------+----------+-----------|
      |             1 |     3286 |      3010 |
      |             2 |     1792 |      1761 |
      |             3 |     1158 |      1249 |
      |             4 |      851 |       969 |
      |             5 |      754 |       792 |
      |             6 |      624 |       669 |
      |             7 |      534 |       580 |
      |             8 |      508 |       511 |
      |             9 |      493 |       458 |
      |---------------+----------+-----------|

Looks like a fairly good fit. How could we quantify the fit so we can compare how the fit varies with the shape parameter? The most common approach is to use the chi-square goodness of fit test.

      def chisq_stat(O, E):
          return sum( [(o - e)**2/e for (o, e) in zip(O, E)] )

Here “O” stands for “observed” and “E” stands for “expected.” The observed counts are the counts we actually saw. The expected values are the values Benford’s law would predict:

      expected = [N*log10((i+1)/i) for i in range(1, 10)] 

Note that we don’t want to pass counts to chisq_stat but counts[1:] instead. This is because counts has an entry for the digit 0, but the leading digit of a positive sample is never 0.
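Putting the pieces together, a sketch of the whole comparison:

      N = 10000
      expected = [N*log10((i+1)/i) for i in range(1, 10)]
      for gamma in [0.1, 0.5, 1.0, 1.5, 2.0]:
          observed = weibull_stats(gamma)[1:]
          print(gamma, chisq_stat(observed, expected))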

Here are the chi square goodness of fit statistics for a few values of γ. (Smaller is better.)

      |-------+------------|
      | Shape | Chi-square |
      |-------+------------|
      |   0.1 |      1.415 |
      |   0.5 |      9.078 |
      |   1.0 |     69.776 |
      |   1.5 |    769.216 |
      |   2.0 |   1873.242 |
      |-------+------------|

This suggests that samples from a Weibull follow Benford’s law fairly well for shape γ < 1, i.e. for the case of decreasing hazard.

Golden angle

The golden angle is related to the golden ratio, but it is not as well known. And the relationship is not quite what you might think at first.

The golden ratio φ is (1 + √5)/2. A golden rectangle is one in which the ratio of the longer side to the shorter side is φ. Credit cards, for example, are typically golden rectangles.

You might guess that a golden angle is 1/φ of a circle, but it’s actually 1/φ² of a circle. Let a be the length of an arc cut out of a circle by a golden angle and b be the length of its complement. Then by definition the ratio of b to a is φ. In other words, the golden angle is defined in terms of the ratio of its complementary arc, not of the entire circle. [1]

The video below has many references to the golden angle. It says that the golden angle is 137.5 degrees, which is fine given the context of a popular video. But this doesn’t explain where the angle comes from or give its exact value of 360/φ² degrees.
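A quick computation gives the number in the video:

      from math import sqrt

      phi = (1 + sqrt(5))/2
      print(360/phi**2)   # 137.5077...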

[1] Why does this work out to 1/φ²? The ratio b/a equals φ, by definition. So the ratio of a to the whole circle is

a/(a + b) = a/(a + φa) = 1/(1 + φ) = 1/φ²

since φ satisfies the quadratic equation 1 + φ = φ².

What personality classifications have in common

There are many ways to divide people into four personality types, from the classical—sanguine, choleric, melancholic, and phlegmatic—to contemporary systems such as the DISC profile. The Myers-Briggs system divides people into sixteen personality types. I just recently ran across the “enneagram,” an ancient system for dividing people into nine categories.

There’s one thing advocates of all the aforementioned systems agree on: the number of basic personality types is a perfect square.

Denver airport, Weierstrass, and A&S

Last night I was driving toward the Denver airport and the airport reminded me of the cover of Abramowitz and Stegun’s Handbook of Mathematical Functions.

Here’s the airport:

Denver airport

And here’s the book cover:

I’ve written about the image on the book cover before. Someone asked me what function it graphed and I decided it was probably the Weierstrass ℘ function.

For more on Weierstrass’ elliptic function and why I think that’s what’s on the cover of A&S, see this post.

Photo of Denver airport via Wikipedia.

Resisting simplicity

As much as we admire simplicity and strive for simplicity, something in us isn’t happy when we achieve it.

Sometimes we’re disappointed with a simple solution because, although we don’t realize it yet, we didn’t properly frame the problem it solves.

I’ve been in numerous conversations where someone says effectively, “I understand that 2+3 = 5, but what if we made it 5.1?” They really want an answer of 5.1, or maybe larger, for reasons they can’t articulate. They formulated a problem whose solution is to add 2 and 3, but that formulation left out something they care about. In this situation, the easy response is to say “No, 2+3 = 5. There’s nothing we can do about that.” The more difficult response is to find out why “5” is an unsatisfactory result.

Sometimes we’re uncomfortable with a simple solution even though it does solve the right problem.

If you work hard and come up with a simple solution, it may look like you didn’t put in much effort. And if someone else comes up with the simple solution, you may look foolish.

Sometimes simplicity is disturbing. Maybe it has implications we have to get used to.

Update: A couple people have replied via Twitter saying that we resist simplicity because it’s boring. I think beneath that is that we’re not ready to move on to a new problem.

When you’re invested in a problem, it can be hard to see it solved. If the solution is complicated, you can keep working for a simpler solution. But once someone finds a really simple solution, it’s hard to justify continuing work in that direction.

A simple solution is not something to dwell on but to build on. We want some things to be boringly simple so we can do exciting things with them. But it’s hard to shift from producer to consumer: now that I’ve produced this simple solution, and I’m still a little sad that it’s wrapped up, how can I use it to solve something else?
