Volume of a rose-shaped torus

Start with a rose, as described in the previous post:

Now spin that rose around a vertical line a distance R from the center of the rose. This makes a torus (doughnut) shape whose cross sections look like the rose above. You could think of having a cutout shaped like the rose above and extruding Play-Doh through it, then joining the ends in a loop.

Volume of rotation from rose

In case you’re curious, the image above was created with the following Mathematica command:

      RevolutionPlot3D[{4 + Cos[5 t] Cos[t], Cos[5 t] Sin[t]}, {t, 0, 2 Pi}]

What would the volume of resulting solid be?

This sounds like a horrendous calculus homework problem, but it’s actually quite easy. A theorem by Pappus of Alexandria (c. 300 AD) says that the volume is equal to the area times the circumference of the circle traversed by the centroid.

The area of a rose of the form r = cos(kθ) is simply π/2 if k is even and π/4 if k is odd. (Recall from the previous post that the number of petals is 2k if k is even and k if k is odd.)

The location of the centroid is easy: it’s at the origin by symmetry. If we rotate the rose around a vertical line x = R then the centroid travels a distance 2πR.

So putting all the pieces together, the volume is π²R if k is even and half that if k is odd. (We assume R > 1 so that the figure doesn’t intersect itself and no volume gets counted twice.)

We can also find the surface area easily by another theorem of Pappus. The surface area is just the arc length of the rose times the circumference of the circle traced out by the centroid. The previous post showed how to find the arc length, so just multiply that by 2πR.
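As a sanity check, here’s a minimal Python sketch of both Pappus calculations for the rose torus plotted above (k = 5, R = 4). It assumes the arc length formula 4E(-k² + 1) derived in the next post, evaluated with scipy.special.ellipe.

from math import pi
from scipy.special import ellipe

k, R = 5, 4                            # rose r = cos(5θ) revolved around x = R, as plotted above
area = pi/2 if k % 2 == 0 else pi/4    # area enclosed by the rose
length = 4*ellipe(1 - k**2)            # arc length for 0 <= θ <= 2π (see the next post)

volume = area * 2*pi*R                 # Pappus: area times distance traveled by the centroid
surface = length * 2*pi*R              # Pappus: arc length times distance traveled by the centroid
print(volume, surface)                 # volume is π²R/2 since k = 5 is odd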

Length of a rose

The polar graph of r = cos(kθ) is called a rose. If k is even, the curve will trace out 2k petals as θ runs between 0 and 2π. If k is odd, it will trace out k petals, tracing each one twice. For example, here’s a rose with k = 5.

(I rotated the graph 36° so it would be symmetric about the vertical axis rather than the horizontal axis.)

The arc length of a curve in polar coordinates is given by

\int_a^b \sqrt{r^2 + \left(\frac{dr}{d\theta}\right)^2}\, d\theta

and so we can use this to find the length. The integral doesn’t have a closed form in terms of elementary functions. Instead, the result turns out to use a special function E(m), the “complete elliptic integral of the second kind,” defined by

E(m) = \int_0^{\pi/2} \sqrt{1 - m \sin^2 x} \, dx

Here’s the calculation for the length of a rose:

\begin{eqnarray*} \int_0^{2\pi} \sqrt{r^2 + (r')^2}\, d\theta &=& \int_0^{2\pi} \sqrt{\cos^2 k\theta + k^2 \sin^2 k\theta} \, d\theta \\ &=& \int_0^{2\pi} \sqrt{\cos^2 u + k^2 \sin^2 u} \, du \\ &=& 4 \int_0^{\pi/2} \sqrt{\cos^2 u + k^2 \sin^2 u} \, du \\ &=& 4 \int_0^{\pi/2} \sqrt{1 + (k^2-1) \sin^2 u} \, du \\ &=& 4 E(-k^2 + 1) \end{eqnarray*}

(The second equality substitutes u = kθ and uses the fact that the integrand has period π.) So the arc length of the rose r = cos(kθ) with θ running from 0 to 2π is 4 E(-k² + 1). You can calculate E in SciPy with scipy.special.ellipe.
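For example, here’s a quick sketch that checks the closed form against direct numerical integration of the arc length integral (ellipe takes the parameter m, matching the definition of E above).

import numpy as np
from scipy.special import ellipe
from scipy.integrate import quad

k = 5
closed_form = 4 * ellipe(1 - k**2)    # 4 E(-k² + 1)

# integrand of the polar arc length formula for r = cos(kθ)
integrand = lambda t: np.sqrt(np.cos(k*t)**2 + k**2 * np.sin(k*t)**2)
numeric, _ = quad(integrand, 0, 2*np.pi, limit=200)

print(closed_form, numeric)           # both roughly 21.01 for k = 5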

If we compute the length of the rose at the top of the post, we get 4 E(-24) = 21.01. Does that pass the sniff test? Each petal goes from r = 0 out to r = 1 and back. If the petal were a straight line, this would have length 2. Since the petals are curved, the length of each is a little more than 2. There are five petals, so the result should be a little more than 10. But we got a little more than 20. How can that be? Since 5 is odd, the rose with k = 5 traces each petal twice, so we should expect a value of a little more than 20, which is what we got.

As k gets larger, the petals come closer to being straight lines. So we should expect that 4E(-k² + 1) approaches 4k as k gets large. The following plot of E(-k² + 1) – k provides empirical support for this conjecture by showing that the difference approaches 0, and gives an idea of the rate of convergence. It should be possible to prove that E(-k² + 1) asymptotically approaches k, but I haven’t done this.
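Here’s a small sketch of the computation behind that plot, printing E(-k² + 1) − k for a few values of k:

from scipy.special import ellipe

for k in [2, 5, 10, 20, 50, 100]:
    print(k, ellipe(1 - k**2) - k)    # the difference decreases toward 0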


When length equals area

The graph of hyperbolic cosine is called a catenary. A catenary has the following curious property: the length of a catenary between two points equals the area under the catenary between those two points.

The proof is surprisingly simple. Start with the following:

\sqrt{1 +\left(\frac{d}{dx} \cosh(x) \right)^2} = \sqrt{1 + \sinh^2(x)} = \cosh(x)

Now integrate the first and last expressions between two points a and b. Note that the former integral gives the arc length of cosh between a and b, and the latter integral gives the area under the graph of cosh between a and b.
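Here’s a quick numerical check of the identity, a sketch using SciPy with arbitrarily chosen endpoints a = -1 and b = 2:

from math import cosh, sinh, sqrt
from scipy.integrate import quad

a, b = -1, 2
arc_length, _ = quad(lambda x: sqrt(1 + sinh(x)**2), a, b)   # length of the catenary
area, _ = quad(cosh, a, b)                                   # area under the catenary
print(arc_length, area)                                      # both equal sinh(b) - sinh(a)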

By the way, the most famous catenary may be the Gateway Arch in St. Louis, Missouri.

Gateway Arch at night

Solving systems of polynomial equations

In a high school algebra class, you learn how to solve polynomial equations in one variable, and systems of linear equations. You might reasonably ask “So when do we combine these and learn to solve systems of polynomial equations?” The answer would be “Maybe years from now, but most likely never.” There are systematic ways to solve systems of polynomial equations, but you’re unlikely to ever see them unless you study algebraic geometry.

Here’s an example from [1]. Suppose you want to find the extreme values of x³ + 2xyz − z² on the unit sphere using Lagrange multipliers. This leads to the following system of polynomial equations, where λ is the Lagrange multiplier.

\begin{eqnarray*} 3x^2 + 2yz - 2x\lambda &=& 0 \\ 2xz - 2y\lambda &=& 0 \\ 2xy - 2z - 2z\lambda &=& 0 \\ x^2 + y^2 + z^2 - 1 &=& 0 \end{eqnarray*}

There’s no obvious way to go about solving this system of equations. However, there is a systematic way to approach this problem using a “lexicographic Gröbner basis.” This transforms the problem into something that looks much worse but that is actually easier to work with. And most importantly, the transformation is algorithmic. It requires some computation—there are numerous software packages for doing this—but doesn’t require a flash of insight.
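For example, here’s a sketch of how you might compute such a basis with SymPy, one of many packages that can do this. The lexicographic order below eliminates λ first and leaves z last.

from sympy import symbols, groebner

x, y, z, lam = symbols('x y z lam')

# the Lagrange multiplier system above
eqs = [3*x**2 + 2*y*z - 2*x*lam,
       2*x*z - 2*y*lam,
       2*x*y - 2*z - 2*z*lam,
       x**2 + y**2 + z**2 - 1]

G = groebner(eqs, lam, x, y, z, order='lex')
for g in G:
    print(g)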

The transformed system looks intimidating compared to the original:

\begin{eqnarray*} \lambda - \frac{3}{2}x - \frac{3}{2}yz - \frac{167616}{3835}z^6 - \frac{36717}{590}z^4 - \frac{134419}{7670}z^2 &=& 0 \\ x^2 + y^2 + z^2 - 1 &=& 0 \\ xy - \frac{19584}{3835}z^5 + \frac{1999}{295}z^3 - \frac{6403}{3835}z &=& 0 \\ xz + yz^2 - \frac{1152}{3835}z^5 - \frac{108}{295}z^3 + \frac{2556}{3835}z &=& 0 \\ y^3 + yz^2 - y -\frac{9216}{3835}z^5 + \frac{906}{295}z^3 - \frac{2562}{3835}z &=& 0 \\ y^2z - \frac{6912}{3835}z^5 -\frac{827}{295}z^3 -\frac{3839}{3835}z &=& 0 \\ yz^3 - yz - \frac{576}{59}z^6 + \frac{1605}{118}z^4 - \frac{453}{118}z^2 &=& 0 \\ z^7 - \frac{1763}{1152}z^5 + \frac{655}{1152}z^3 - \frac{11}{288}z &=& 0 \end{eqnarray*}

We’ve gone from four equations to eight, from small integer coefficients to large fraction coefficients, from squares to seventh powers. And yet we’ve made progress because the four variables are less entangled in the new system.

The last equation involves only z and factors nicely:

z\left(z^2 - 1\right)\left(z^2 - \frac{4}{9}\right) \left(z^2 - \frac{11}{128}\right) = 0

This cracks the problem wide open. We can easily find all the possible values of z, and once we substitute values for z, the rest of the equations simplify greatly and can be solved easily.
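You can verify the factorization of the last equation directly, for example with SymPy:

from sympy import symbols, Rational, factor

z = symbols('z')
p = z**7 - Rational(1763, 1152)*z**5 + Rational(655, 1152)*z**3 - Rational(11, 288)*z
print(factor(p))   # roots are z = 0, ±1, ±2/3, ±sqrt(11/128)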

The key is that Gröbner bases transform our problem into a form that, although it appears more difficult, is easier to work with because the variables are somewhat separated. Solving for one variable, z, is like pulling out a thread that then makes the rest of the threads easier to separate.

Related: The great reformulation of algebraic geometry

* * *

[1] David Cox et al. Applications of Computational Algebraic Geometry: American Mathematical Society Short Course January 6-7, 1997 San Diego, California (Proceedings of Symposia in Applied Mathematics)

The 3n+1 problem and Benford’s law

This is the third, and last, of a series of posts on Benford’s law, this time looking at a famous open problem in computer science, the 3n + 1 problem, also known as the Collatz conjecture.

Start with a positive integer n. Compute 3n + 1 and divide by 2 repeatedly until you get an odd number. Then repeat the process. For example, suppose we start with 13. We get 3*13+1 = 40, and 40/8 = 5, so 5 is the next term in the sequence. 5*3 + 1 is 16, which is a power of 2, so we get down to 1.

Does this process always reach 1? So far nobody has found a proof or a counterexample.

If you pick a large starting number n at random, it appears that not only will the sequence terminate, the values produced by the sequence approximately follow Benford’s law (source). If you’re unfamiliar with Benford’s law, please see the first post in this series.

Here’s some Python code to play with this.

from math import log10, floor

def leading_digit(x):
    y = log10(x) % 1
    return int(floor(10**y))

# 3n+1 iteration: collect the odd numbers visited, starting from seed
def iterates(seed):
    s = set()
    n = seed
    while n > 1:
        n = 3*n + 1
        while n % 2 == 0:
            n //= 2  # integer division; n / 2 would turn huge integers into floats
        s.add(n)
    return s

Let’s save the iterates for a large starting value:

it = iterates(378357768968665902923668054558637)

Here’s what we get and what we would expect from Benford’s law:

|---------------+----------+-----------|
| Leading digit | Observed | Predicted |
|---------------+----------+-----------|
|             1 |       46 |        53 |
|             2 |       26 |        31 |
|             3 |       29 |        22 |
|             4 |       16 |        17 |
|             5 |       24 |        14 |
|             6 |        8 |        12 |
|             7 |       12 |        10 |
|             8 |        9 |         9 |
|             9 |        7 |         8 |
|---------------+----------+-----------|

We get a chi-square statistic of 12.88 (p = 0.116), so the fit is reasonable.
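Here’s a sketch of how the observed counts and the chi-square statistic can be computed, assuming the leading_digit and iterates functions defined above:

from math import log10
from scipy.stats import chisquare

digits = [leading_digit(n) for n in it]
observed = [digits.count(d) for d in range(1, 10)]

# Benford's law predicts P(leading digit = d) = log10(1 + 1/d)
total = len(digits)
predicted = [total * log10(1 + 1/d) for d in range(1, 10)]

stat, p = chisquare(observed, predicted)
print(observed, stat, p)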

Here’s another run with a different starting point.

it = iterates(243963882982396137355964322146256)

which produces

|---------------+----------+-----------|
| Leading digit | Observed | Predicted |
|---------------+----------+-----------|
|             1 |       44 |        41 |
|             2 |       22 |        24 |
|             3 |       15 |        17 |
|             4 |       12 |        13 |
|             5 |       11 |        11 |
|             6 |        9 |         9 |
|             7 |       11 |         8 |
|             8 |        6 |         7 |
|             9 |        7 |         6 |
|---------------+----------+-----------|

This has a chi-square value of 2.166 (p = 0.975), which is an even better fit.

Golden angle

The golden angle is related to the golden ratio, but it is not as well known. And the relationship is not quite what you might think at first.

The golden ratio φ is (1 + √5)/2. A golden rectangle is one in which the ratio of the longer side to the shorter side is φ. Credit cards, for example, are typically golden rectangles.

You might guess that a golden angle is 1/φ of a circle, but it’s actually 1/φ² of a circle. Let a be the length of an arc cut out of a circle by a golden angle and b be the length of its complement. Then by definition the ratio of b to a is φ. In other words, the golden angle is defined by the ratio of its complementary arc to its own arc, not by its ratio to the entire circle. [1]

The video below has many references to the golden angle. It says that the golden angle is 137.5 degrees, which is fine given the context of a popular video. But this doesn’t explain where the angle comes from or give its exact value of 360/φ² degrees.

[1] Why does this work out to 1/φ²? The ratio b/a equals φ, by definition. So the ratio of a to the whole circle is

\frac{a}{a + b} = \frac{a}{a + \varphi a} = \frac{1}{1 + \varphi} = \frac{1}{\varphi^2}

since φ satisfies the quadratic equation 1 + φ = φ².
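As a quick check, computing the golden angle numerically:

from math import sqrt

phi = (1 + sqrt(5)) / 2
print(360 / phi**2)   # about 137.5 degrees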

Flying through a 3D fractal

A Menger sponge is created by starting with a cube and recursively removing chunks of it. Draw a 3×3 grid on one face of the cube, then remove the middle square, all the way through the cube. Then do the same for each of the eight remaining squares. Repeat this process over and over, and do it for each face.

The holes are all rectangular, so it’s surprising that the geometry is so varied when you slice open a Menger sponge. For example, when you cut it on the diagonal, you can see stars! (I wrote about this here.)

I mentioned this blog post to a friend at Go 3D Now, a company that does 3D scanning and animation, and he created the video below. The video starts out by taking you through the sponge, then at about the half way part the sponge splits apart.

Computing harmonic numbers

The harmonic numbers are defined by

H_n = \sum_{k=1}^n \frac{1}{k}

Harmonic numbers are sort of a discrete analog of logarithms since

\log n = \int_1^n \frac{1}{x} \, dx

As n goes to infinity, the difference between Hn and log n converges to Euler’s constant γ = 0.57721… [1]

How would you compute Hn? For small n, simply use the definition. But if n is very large, there’s a way to approximate Hn without having to do a large sum.

Since in the limit Hn – log n goes to γ, a crude approximation would be

H_n \approx \log n + \gamma

But we could do much better by adding a couple terms to the approximation above. [2] That is,

H_n \approx \log n + \gamma + \frac{1}{2n} - \frac{1}{12n^2}

The error in the approximation above is between 0 and 1/(120n⁴).

So if you used this to compute the 1000th harmonic number, the error would be less than one part in 120,000,000,000,000. Said another way, for n = 1000 the approximation differs from the exact value in the 15th significant digit, approximately the resolution of floating point numbers (i.e. IEEE 754 double precision).

And the formula is even more accurate for larger n. If we wanted to compute the millionth harmonic number, the error in our approximation would be somewhere around the 26th decimal place.
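Here’s a sketch comparing the approximation to a directly computed sum. The value of γ below is assumed, truncated to double precision, and fsum keeps the direct sum accurate.

from math import log, fsum

gamma = 0.5772156649015329   # Euler's constant, truncated

def H_exact(n):
    return fsum(1/k for k in range(1, n + 1))

def H_approx(n):
    return log(n) + gamma + 1/(2*n) - 1/(12*n**2)

n = 1000
print(H_exact(n))    # the two agree to roughly the precision of a double
print(H_approx(n))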

* * *

[1] See Julian Havil’s excellent Gamma: Exploring Euler’s Constant. It’s a popular-level book, but more sophisticated than most such books.

[2] There’s a sequence of increasingly accurate approximations that keep adding reciprocals of even powers of n, based on truncating an asymptotic series. See Concrete Mathematics for details.

Uniform distribution of powers mod 1

A few days ago I wrote about how powers of the golden ratio are nearly integers but powers of π are not. This post is similar but takes a little different perspective. Instead of looking at how close powers are to the nearest integers, we’ll look at how close they are to their floor, the largest integer below. Put another way, we’ll throw away the integer parts and look at the decimal parts.

First a theorem:

For almost all x > 1, the sequence (x^n) for n = 1, 2, 3, … is u.d. mod 1. [1]

Here “almost all” is a technical term meaning that the set of x’s for which the statement above does not hold has Lebesgue measure zero. The abbreviation “u.d.” stands for “uniformly distributed.” A sequence is uniformly distributed mod 1 if the fractional parts of the sequence are distributed like uniform random variables.

Even though the statement holds for almost all x, it’s hard to prove for particular values of x. And it’s easy to find particular values of x for which the theorem does not hold.

From [1]:

… it is interesting to note that one does not know whether sequences such as (e^n), (π^n), or even ((3/2)^n) are u.d. mod 1 or not.

Obviously powers of integers are not u.d. mod 1 because their fractional parts are all 0. And we’ve shown before that powers of the golden ratio and powers of the plastic constant are near integers, i.e. their fractional parts cluster near 0 and 1.

The curious part about the quote above is that it’s not clear whether powers of 3/2 are uniformly distributed mod 1. I wouldn’t expect powers of any rational number to be u.d. mod 1. Either my intuition was wrong, or it’s right but hasn’t been proved, at least not when [1] was written.

The next post will look at powers of 3/2 mod 1 and whether they appear to be uniformly distributed.
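If you’d like to experiment before then, here’s a minimal sketch that computes the fractional parts of (3/2)^n exactly with rational arithmetic:

from fractions import Fraction
from math import floor

x = Fraction(3, 2)
parts = [float(x**n - floor(x**n)) for n in range(1, 31)]
print(parts)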

* * *

[1] Kuipers and Niederreiter, Uniform Distribution of Sequences

Natural transformations

The ladder of abstractions in category theory starts with categories, then functors, then natural transformations. Unfortunately, natural transformations don’t seem very natural when you first see the definition. This is ironic since the original motivation for developing category theory was to formalize the intuitive notion of a transformation being “natural.” Historically, functors were defined in order to define natural transformations, and categories were defined in order to define functors, just the opposite of the order in which they are introduced now.

A category is a collection of objects and arrows between objects. Usually these “arrows” are functions, but in general they don’t have to be.

A functor maps a category to another category. Since a category consists of objects and arrows, a functor maps objects to objects and arrows to arrows.

A natural transformation maps functors to functors. Sounds reasonable, but what does that mean?

You can think of a functor as a way to create a picture of one category inside another. Suppose you have some category and pick out two objects in that category, A and B, and suppose there is an arrow f between A and B. Then a functor F would take A and B and give you objects FA and FB in another category, and an arrow Ff between FA and FB. You could do the same with another functor G. So the objects A and B and the arrow between them in the first category have counterparts under the functors F and G in the new category as in the two diagrams below.

A natural transformation α between F and G is something that connects these two diagrams into one diagram that commutes.

The natural transformation α is a collection of arrows in the new category, one for every object in the original category. So we have an arrow αA for the object A and another arrow αB for the object B. These arrows are called the components of α at A and B respectively.
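In symbols, the requirement that the combined diagram commutes is that for every arrow f from A to B,

\alpha_B \circ Ff = Gf \circ \alpha_A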

Note that the components of α depend on the objects A and B but not on the arrow f. If f represents any other arrow from A to B in the original category, the same arrows αA and αB fill in the diagram.

Natural transformations are meant to capture the idea that a transformation is “natural” in the sense of not depending on any arbitrary choices. If a transformation does depend on arbitrary choices, the arrows αA and αB would not be reusable but would have to change when f changes.

The next post will discuss the canonical examples of natural and unnatural transformations.

Related: Applied category theory

How areas of math are connected

In my previous post, I discussed how number theory and topology relate to other areas of math. Part of that was to show a couple diagrams from Jean Dieudonné’s book Panorama of Pure Mathematics, as seen by N. Bourbaki. That book has only small star-shaped diagrams, each considering one area of math at a time. I’ve created a diagram that pastes these local views into one grand diagram. Along the way I’ve done a little editing because the original diagrams were not entirely consistent.

Here’s a condensed view of the graph. You can find the full image here.

The graph is so dense that it’s hard to tell which areas have the most or least connections. Here are some tables to clarify that. First, counting how many areas a particular area contributes to, i.e. the number of outgoing arrows.

|-------------------------------------+---------------|
| Area                                | Contributions |
|-------------------------------------+---------------|
| Homological algebra                 |            12 |
| Lie groups                          |            11 |
| Algebraic and differential topology |            10 |
| Categories and sheaves              |             9 |
| Commutative algebra                 |             9 |
| Commutative harmonic analysis       |             9 |
| Algebraic geometry                  |             8 |
| Differential geometry and manifolds |             8 |
| Integration                         |             8 |
| Partial differential equations      |             8 |
| General algebra                     |             7 |
| Noncommutative harmonic analysis    |             6 |
| Ordinary differential equations     |             6 |
| Spectral theory of operators        |             6 |
| Analytic geometry                   |             5 |
| Automorphic and modular forms       |             5 |
| Classical analysis                  |             5 |
| Mathematical logic                  |             5 |
| Abstract groups                     |             4 |
| Ergodic theory                      |             4 |
| Probability theory                  |             4 |
| Topological vector spaces           |             4 |
| General topology                    |             3 |
| Number theory                       |             3 |
| Von Neumann algebras                |             2 |
| Set theory                          |             1 |
|-------------------------------------+---------------|

Next, counting the sources each area draws on, i.e. counting incoming arrows.

|-------------------------------------+---------|
| Area                                | Sources |
|-------------------------------------+---------|
| Algebraic geometry                  |      13 |
| Number theory                       |      12 |
| Lie groups                          |      11 |
| Noncommutative harmonic analysis    |      11 |
| Algebraic and differential topology |      10 |
| Analytic geometry                   |      10 |
| Automorphic and modular forms       |      10 |
| Ordinary differential equations     |      10 |
| Ergodic theory                      |       9 |
| Partial differential equations      |       9 |
| Abstract groups                     |       8 |
| Differential geometry and manifolds |       8 |
| Commutative algebra                 |       6 |
| Commutative harmonic analysis       |       6 |
| Probability theory                  |       5 |
| Categories and sheaves              |       4 |
| Homological algebra                 |       4 |
| Spectral theory of operators        |       4 |
| Von Neumann algebras                |       4 |
| General algebra                     |       2 |
| Mathematical logic                  |       1 |
| Set theory                          |       1 |
| Classical analysis                  |       0 |
| General topology                    |       0 |
| Integration                         |       0 |
| Topological vector spaces           |       0 |
|-------------------------------------+---------|

Finally, connectedness, counting incoming and outgoing arrows.

|-------------------------------------+-------------|
| Area                                | Connections |
|-------------------------------------+-------------|
| Lie groups                          |          22 |
| Algebraic geometry                  |          21 |
| Algebraic and differential topology |          20 |
| Noncommutative harmonic analysis    |          17 |
| Partial differential equations      |          17 |
| Differential geometry and manifolds |          16 |
| Homological algebra                 |          16 |
| Ordinary differential equations     |          16 |
| Analytic geometry                   |          15 |
| Automorphic and modular forms       |          15 |
| Commutative algebra                 |          15 |
| Commutative harmonic analysis       |          15 |
| Number theory                       |          15 |
| Categories and sheaves              |          13 |
| Ergodic theory                      |          13 |
| Abstract groups                     |          12 |
| General algebra                     |          10 |
| Spectral theory of operators        |          10 |
| Probability theory                  |           9 |
| Integration                         |           8 |
| Mathematical logic                  |           6 |
| Von Neumann algebras                |           6 |
| Classical analysis                  |           5 |
| Topological vector spaces           |           4 |
| General topology                    |           3 |
| Set theory                          |           2 |
|-------------------------------------+-------------|

There are some real quirks here. The most foundational areas get short shrift. Set theory contributes to only one area of math?! Topological vector spaces don’t depend on anything, not even topology?!

I suspect Dieudonné had in mind fairly high-level contributions. Topological vector spaces, for example, obviously depend on topology, but not deeply. You could do research in the area while seldom drawing on more than an introductory topology course. Elementary logic and set theory are used everywhere, but most mathematicians have no need for advanced logic or set theory.


Mathematical balance of trade

Areas of math all draw on and contribute to each other. But there’s a sort of trade imbalance between areas. Some, like analytic number theory, are net importers. Others, like topology, are net exporters.

Analytic number theory uses the tools of analysis, especially complex analysis, to prove theorems about integers. The first time you see this it’s quite a surprise. But then you might expect that since analysis contributes to number theory, number theory must contribute to analysis. But it doesn’t much.

Topology imports ideas from algebra. But it exports more than it imports, to algebra and to other areas. Topology started as a generalization of geometry. Along the way it developed ideas that extend far beyond geometry. For example, computer science, which ostensibly has nothing to do with whether you can continuously deform one shape into another, uses ideas from category theory that were developed initially for topology.

Here’s how Jean Dieudonné saw things. The following are my reconstructions of a couple diagrams from his book Panorama of Pure Mathematics, as seen by N. Bourbaki. An arrow from A to B means that A contributes to B, or B uses A.

For number theory, some of Dieudonné’s arrows go both ways, some only go into number theory. No arrows go only outward from number theory.

With topology, however, there’s a net outward flux: more arrows go out than come in.

These diagrams are highly subjective. There’s plenty of room for disagreement. Dieudonné wrote his book 35 years ago, so you might argue that they were accurate at the time but need to be updated. In any case, the diagrams are interesting.

Update: See the next post for a larger diagram, sewing together little diagrams like the ones above.

Are polar coordinates backward?

I’d never given any thought to the order of polar coordinates until yesterday. They’re written (r, θ). That’s just how it is. But a friend pointed out two reasons why this bothers him.

First, r is typically a function of θ, just as y is typically a function of x. But in rectangular coordinates, the independent variable is the first element of a pair while in polar coordinates it is the second element.

Second, if you’re going to walk a mile northwest, how do you proceed? You first face northwest, then you walk for a mile. You don’t walk a mile to the east and then walk 135° counterclockwise along an arc centered where you started. That is, you set your θ first, then increase your r.

The (r, θ) convention isn’t going to change. Maybe the only take-away from this discussion is to appreciate someone’s confusion who sees polar coordinates for the first time and wants to reverse their order.

Related post: Why use j for imaginary unit

(I don’t use j for imaginary unit, except when writing Python code. The i notation is too firmly engraved in my brain. But I understand why j has advantages.)