More triangle inequalities

Yesterday I wrote about a triangle inequality discovered by Paul Erdős.

Let P be a point inside a triangle ABC. Let x, y, z be the distances from P to the vertices and let p, q, r be the distances to the sides. Then Erdős’ inequality says

x + y + z ≥ 2(p + q + r).

Using the same notation, here are four more triangle inequalities discovered by Oppenheim [1].

  • px + qy + rz ≥ 2(qr + rp + pq)
  • yz + zx + xy ≥ 4(qr + rp + pq)
  • xyz ≥ 8pqr
  • 1/p + 1/q + 1/r ≥ 2(1/x + 1/y + 1/z)
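All five inequalities are easy to check numerically. Here's a quick sanity check in Python; the triangle and the interior point are arbitrary choices of mine for illustration, not anything from the original papers.

```python
import numpy as np

# Arbitrary triangle and interior point (the centroid), chosen for illustration
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
P = (A + B + C) / 3  # the centroid is always interior

def dist_to_line(P, U, V):
    # distance from P to the line through U and V, via a 2D cross product
    d, w = V - U, P - U
    return abs(d[0] * w[1] - d[1] * w[0]) / np.hypot(*d)

x, y, z = (np.linalg.norm(P - V) for V in (A, B, C))               # to vertices
p, q, r = (dist_to_line(P, *s) for s in ((B, C), (C, A), (A, B)))  # to sides

assert x + y + z >= 2 * (p + q + r)                # Erdős–Mordell
assert p*x + q*y + r*z >= 2 * (q*r + r*p + p*q)    # Oppenheim inequalities
assert y*z + z*x + x*y >= 4 * (q*r + r*p + p*q)
assert x*y*z >= 8 * p*q*r
assert 1/p + 1/q + 1/r >= 2 * (1/x + 1/y + 1/z)
```

Note that the pairing matters in the first Oppenheim inequality: x and p go with the same vertex A, with p the distance to the opposite side BC.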

[1] A. Oppenheim. The Erdös Inequality and Other Inequalities for a Triangle. The American Mathematical Monthly. Vol. 68, No. 3 (Mar., 1961), pp. 226–230

Area of unit disk under a univalent function

Let D be the unit disk in the complex plane and let f be a univalent function on D, meaning it is analytic and one-to-one on D.

There is a simple way to compute the area of f(D) from the coefficients in its power series.

If

f(z) = \sum_{n=0}^\infty c_n z^n

then

\text{area}\left(f(D)\right) = \int_D |f^\prime(z)|^2 \, dx \,dy = \pi \sum_{n=1}^\infty n |c_n|^2

The first equality follows from the change of variables theorem for functions of two variables, using the Cauchy-Riemann equations to simplify the Jacobian. The second equality is a straightforward calculation that you can work out in polar coordinates.

Application

Let’s apply this to what I called the minimalist Mandelbrot set the other day.

The orange disk has radius 1/4 and so its area is simply π/16.

Finding the area of the blue cardioid takes more work, but the theorem above makes it easy. The cardioid is the image of the set {α : |α| < ½} under the map f(z) = z − z². To apply the theorem above we need the domain to be the unit disk, not the disk of radius ½, so we define

f(z) = \frac{1}{2} z - \frac{1}{4} z^2

as a function on the unit disk. Now c1 = ½ and c2 = −¼, and so the area of f(D) is π(1 · (½)² + 2 · (¼)²) = 3π/8.
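Both equalities in the theorem can be checked numerically here. The sketch below estimates the integral of |f′|² over the unit disk by Monte Carlo and compares it with the coefficient sum:

```python
import numpy as np

rng = np.random.default_rng(0)

# uniform samples from the unit disk, by rejection from the square
pts = rng.uniform(-1, 1, size=(200_000, 2))
pts = pts[(pts ** 2).sum(axis=1) < 1]
z = pts[:, 0] + 1j * pts[:, 1]

# f(z) = z/2 - z^2/4 has derivative f'(z) = 1/2 - z/2
mean_sq = np.mean(np.abs(0.5 - 0.5 * z) ** 2)
area_mc = np.pi * mean_sq  # integral = (area of D) * (mean of |f'|^2)

area_exact = np.pi * (1 * 0.5**2 + 2 * 0.25**2)  # coefficient formula: 3π/8
```

The two numbers agree to about three decimal places with this many samples.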

I said in the earlier post that the minimalist Mandelbrot set makes up most of the Mandelbrot set. Now we can quantify that. The area of the minimalist Mandelbrot set is 7π/16 = 1.3744. The area of the Mandelbrot set is 1.5065, so the minimalist set shown above makes up over 91% of the total area.


Random samples from a polygon

Ted Dunning left a comment on my post on random sampling from a triangle saying you could extend this to sampling from a polygon by dividing the polygon into triangles, and selecting a triangle each time with probability proportional to the triangle’s area.

To illustrate this, let’s start with an irregular pentagon.

To pick a point inside, I used the centroid, the average of the vertices. Connecting the centroid to each of the vertices splits the pentagon into triangles. (Here I implicitly used the fact that this pentagon is convex. The centroid of a non-convex polygon could be outside the polygon.)

We can find the area of the triangles using Heron’s formula.
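Putting the pieces together, here's a sketch of the whole procedure. The pentagon vertices below are made up for illustration, and I compute triangle areas with a cross product, which gives the same result as Heron's formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# a hypothetical convex pentagon; any convex polygon works the same way
verts = np.array([[0, 0], [3, -1], [5, 1], [3, 4], [0, 3]], dtype=float)
centroid = verts.mean(axis=0)

# fan triangulation: (centroid, v_i, v_{i+1})
tris = [(centroid, verts[i], verts[(i + 1) % len(verts)])
        for i in range(len(verts))]

def tri_area(a, b, c):
    # half the magnitude of a cross product; Heron's formula gives the same
    return 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))

areas = np.array([tri_area(*t) for t in tris])
probs = areas / areas.sum()

def sample(n):
    idx = rng.choice(len(tris), size=n, p=probs)  # triangle chosen ∝ area
    u, v = rng.uniform(size=(2, n))
    flip = u + v > 1                  # reflect points into the triangle
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    pts = np.empty((n, 2))
    for k, (a, b, c) in enumerate(tris):
        m = idx == k
        pts[m] = a + np.outer(u[m], b - a) + np.outer(v[m], c - a)
    return pts

points = sample(10_000)
```

The inner sampling step is the parallelogram-flip method for triangles discussed in an earlier post.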

Here’s what we get for random samples.

A triangle inequality by Erdős

Plane geometry has been studied since ancient times, and yet new results keep being discovered millennia later, including elegant results. It’s easy to come up with a new result by proving a complicated theorem that Euclid would not have cared about. It’s more impressive to come up with a new theorem that Euclid would have understood and found interesting.

In 1935 Paul Erdős conjectured a triangle inequality of this kind, which Mordell and Barrow proved in 1937 [1].

Let P be a point inside a triangle ABC. Let x, y, z be the distances from P to the vertices and let p, q, r, be the distances to the sides. Then

x + y + z ≥ 2(p + q + r)

with equality only if P is the center of an equilateral triangle [2]. In the figure above, the theorem says the dashed blue lines together are more than twice as long as the solid red lines.

How far apart are the left and right sides of the inequality? This was the motivation for the previous post on selecting random points from a triangle. I wanted to generate random points and compare the two sides of the Erdős-Mordell-Barrow inequality.

We can visualize the inequality by generating random points inside the triangle and plotting the points with a color that indicates the inequality gap, darker blue corresponding to a larger gap.

This shows the inequality is sharper in the middle of the triangle than near the vertices.

[1] Problem 3740, American Mathematical Monthly, 44 (1937) 252-254.

[2] You could interpret this as a theorem comparing barycentric and trilinear coordinates.

Randomly selecting points inside a triangle

If you have a triangle with vertices A, B, and C, how would you generate random points inside the triangle ABC?

Barycentric coordinates

One idea would be to use barycentric coordinates.

  1. Generate random numbers α, β, and γ from the interval [0, 1].
  2. Normalize the numbers to sum to 1 by dividing each by their sum.
  3. Return αA + βB + γC.

This generates points inside the triangle, but not uniformly.

Accept-reject

Another idea is to use an accept-reject method. Draw a rectangle around the triangle, generate points in the rectangle, and throw them away if they land outside the triangle.

An advantage to this method is that it obviously works because it doesn’t rely on anything subtle. Three cheers for brute force!

The method is fairly efficient; on average only half the points will be rejected.

A disadvantage to all accept-reject methods is that they have variable runtime, though this only matters in some applications.

Accept-flip

There is a clever variation on the accept-reject method. Create a parallelogram by attaching a flipped copy of the triangle. Now randomly sample from the parallelogram. Every sample point will either land inside the original triangle or in its flipped twin. If it landed in the original triangle, keep it. If it landed in the twin, flip it back over and use that point.

This is like an accept-reject method, except there’s no waste. Every point is kept, possibly after flipping.
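Here's a minimal sketch of the flip method. The standard trick is to sample the parallelogram with coordinates (u, v) in the basis B − A, C − A, and flip whenever u + v > 1; the vertices below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_triangle(A, B, C, n):
    # sample the parallelogram spanned by B-A and C-A, then reflect
    # any point that lands in the mirrored half back into the triangle
    u, v = rng.uniform(size=(2, n))
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    return A + np.outer(u, B - A) + np.outer(v, C - A)

pts = sample_triangle(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                      np.array([0.0, 1.0]), 1000)
```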

You can find code for implementing this method on Stack Overflow.

Barycentric coordinates fixed

Felix left a note in the comments that the barycentric approach would work if you draw the random weights from an exponential distribution rather than a uniform distribution. That goes against my intuition. I had thought about using a different distribution on the weights, though I didn’t work out what it would need to be, and I did not expect it to be exponential. Apparently it works. Here’s a proof by plot:
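The reason it works: three independent exponential weights, normalized to sum to 1, have a Dirichlet(1, 1, 1) distribution, which is uniform on the simplex of weights, and the barycentric map carries that uniform distribution to the uniform distribution on the triangle. A sketch, with an arbitrary triangle:

```python
import numpy as np

rng = np.random.default_rng(0)

A, B, C = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])

w = rng.exponential(size=(10_000, 3))
w /= w.sum(axis=1, keepdims=True)  # Dirichlet(1,1,1): uniform on the simplex

pts = w @ np.vstack([A, B, C])     # barycentric combination of the vertices
```

As a quick check, the sample mean comes out close to the centroid (1/3, 1/3).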


A mental random number generator


George Marsaglia was a big name in random number generation. I’ve referred to his work multiple times here, most recently in this article from March on randomly generating points on a sphere. He is best remembered for his DIEHARD battery of tests for RNG quality. See, for example, this post.

I recently learned about a mental RNG that Marsaglia developed. It’s not great, but it’s pretty good for something that’s easy to mentally compute. The output is probably better than if you simply tried to make up random numbers; people are notoriously bad at doing that. Hillel Wayne explores Marsaglia’s method in detail here.

Here’s a summary of the generator in Python.

state = 42 # Set the seed to any two digit number

def random_digit():
    global state
    tens, ones = state // 10, state % 10
    state = tens + 6 * ones
    return state % 10
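How long until the digits repeat? For state values between 1 and 58, the update s ↦ tens(s) + 6·ones(s) is the same as s ↦ 6s mod 59, so the state sequence should cycle with period equal to the multiplicative order of 6 mod 59, which works out to be 58. A quick empirical check of that claim:

```python
def step(s):
    # one state update of Marsaglia's mental generator
    return s // 10 + 6 * (s % 10)

# count steps until the state first returns to the seed
seed = 42
s, n = step(seed), 1
while s != seed:
    s, n = step(s), n + 1

# n is the period of the state sequence: 58
```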


Bech32 encoding

Bech32 is an algorithm for encoding binary data, specifically Bitcoin addresses, in a human-friendly way using a 32-character alphabet. The Bech32 alphabet consists of lowercase letters and digits, excluding the digit 1 and the letters b, i, and o.

The Bech32 alphabet design is similar to that for other coding schemes in that it seeks to remove letters that are visually similar. However, the order of the alphabet is unusual:

    qpzry9x8gf2tvdw0s3jn54khce6mua7l

I’ll explain the motivation for the ordering shortly. Here’s the same alphabet in conventional order, not the order used by Bech32:

    023456789acdefghjklmnpqrstuvwxyz
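The data part of an address is just a table lookup into this alphabet, one character per 5-bit value. Here's a sketch of that mapping alone, leaving out the human-readable part and the checksum that the full algorithm requires:

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"  # the Bech32 alphabet

def encode_data(values):
    # map each 5-bit value (0-31) to its Bech32 character
    return "".join(CHARSET[v] for v in values)

def decode_data(s):
    # inverse mapping; raises ValueError on characters outside the alphabet
    return [CHARSET.index(ch) for ch in s]
```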

The Bech32 algorithm does more than represent 5-bit words as alphanumeric characters. The full algorithm is documented here. The algorithm is applied to the output of a RIPEMD-160 hash, and so the number of bits is a multiple of 5.

The Bech32 algorithm includes a BCH (Bose–Chaudhuri–Hocquenghem) checksum, and takes its name from this checksum.

Note that the BCH checksum is not a cryptographic hash; it’s an error correction code. Its purpose is to catch mistakes, not to thwart malice. Contrast this with Base58Check, which uses part of a SHA-256 hash. Base58Check can detect errors, but it cannot correct them.

I’ve had a surprisingly hard time finding the specifics of the BCH(n, d) code that Bech32 uses. Every source I’ve found either does not give values of n and d or gives different values. In any case, the order of the Bech32 alphabet was chosen so that the mistakes humans are most likely to make are also the mistakes the BCH code is most likely to detect and correct.


Mandelbrot area and escape times

The two latest posts have been about the Mandelbrot set, the set of complex numbers c such that iterations of

f(z) = z² + c

remain bounded. It’s easy to see that the sequence of iterates will go off to infinity if at any step |z| > 2.

For each c, we can look at the escape time, the number of iterations it takes before a value has magnitude greater than 2. The Mandelbrot set is the set of points with infinite escape time.
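Escape time is straightforward to compute. Here's a sketch, starting the iteration from z = 0 so that the first iterate is c:

```python
def escape_time(c, max_iter=1000):
    # number of iterations of z -> z^2 + c before |z| exceeds 2,
    # or None if the orbit stays bounded for max_iter iterations
    z = 0
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > 2:
            return n
    return None

# escape_time(3) == 1, escape_time(1) == 3, and escape_time(-1) is None
```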

In the plot below, I sampled 1,000,000 points at random and colored each according to the corresponding escape time. The lighter blue color corresponds to earlier escape.

The white points in the interior have an escape time of more than 10 (and possibly infinity).

When plotting the Mandelbrot set, how do you know whether a point will never escape? You don’t, not with certainty. But by looking at the distribution of escape times you can see that most points that are going to escape do so quickly. For example, after sampling 1,000,000 points, doing 2000 iterations for each point, 99.9% of points that escape do so within 248 iterations.

If we were to plot a histogram of the escape times it would simply look like a spike on the left end. Using a logarithmic scale for the vertical axis lets us see more of what’s going on. Note that the escape times decrease rapidly, even on a log scale.

By counting what portion of points do not escape, we can calculate a Monte Carlo estimate of the area of the Mandelbrot set, and in fact this is the best way to calculate the area. There is an exact expression for the area in terms of an infinite sum, but the sum converges so slowly that it is unusable; Monte Carlo sampling is faster.
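Here's a sketch of the Monte Carlo estimate, with smaller sample and iteration counts than the experiment described below:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# random points in the square [-2, 2] x [-2, 2]
c = rng.uniform(-2, 2, n) + 1j * rng.uniform(-2, 2, n)
z = np.zeros(n, dtype=complex)
escaped = np.zeros(n, dtype=bool)

for _ in range(500):
    alive = ~escaped                  # only update orbits still in play
    z[alive] = z[alive] ** 2 + c[alive]
    escaped[alive] = np.abs(z[alive]) > 2

p = (~escaped).mean()   # fraction with no escape within 500 iterations
area = 16 * p           # the square has area 16
margin = 16 * 1.96 * np.sqrt(p * (1 - p) / n)  # 95% confidence half-width
```

With these settings the estimate lands near 1.51, give or take a few hundredths.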

Using 1,000,000 random values of c, a total of 905,444 points from the square [−2, 2] × [−2, 2] escape within 2,000 iterations. Since the square has area 16, this gives us an estimate

A = 16 × (1,000,000 − 905,444)/1,000,000 = 1.5129.

A 95% confidence interval for the area is

A ± 16 × 1.96 × √(p(1 − p)/1000000) = (1.5082, 1.5176)

where p = 905444/1000000.

A study [1] found the area to be 1.506484 ± 0.000004, which is just outside of our confidence interval. Some of the difference is due to chance, and some of it is due to our stopping at 2000 iterations. If we had done more iterations, more points would escape (though not very many, for reasons explained above).

Update: I redid my experiment with 10,000,000 samples and 32,000 iterations and got an area estimate of 1.507954 with a 95% confidence interval (1.505056, 1.510851), which would include the estimate cited above. However, for reasons I give here I suspect the confidence interval in [1] is not as small as reported.

[1] See OEIS A098403

Mandelbrot points of every period

As mentioned in the previous post, most of the area in the Mandelbrot set comes from two regions. The largest is the blue cardioid region below and the next largest is the orange disk.

The blue cardioid is the set of points c such that iterations of z² + c converge to a fixed point. The orange disk is the set of points that converge to a cycle of period 2.

We can watch the convergence by picking a point and drawing a line between the iterates with the following Python function.

import matplotlib.pyplot as plt

def plot_iterates(c):
    f = lambda z: z**2 + c
    z = c
    s = [z]
    for i in range(1, 20):
        z = f(z)
        s.append(z)
        plt.plot([s[i-1].real, s[i].real], [s[i-1].imag, s[i].imag], color="C0")
    plt.gca().set_aspect("equal")
    plt.show()

First, we start with a point in the cardioid that converges to a fixed point. We could say it converges to a cycle of period 1.

plot_iterates(0.05 + 0.3j)

Next, let’s start at a point in the orange disk with period 2.

plot_iterates(-1 + 0.2j)

We can find points with any period n. Here’s one of period 3.

plot_iterates(-0.13 + 0.66j)

And here’s one of period 4.

plot_iterates(0.256 + 0.55j)
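We can also detect the period numerically instead of reading it off a plot: iterate long enough to settle onto the attracting cycle, then look for the smallest power of f that maps the point back to itself. This is a sketch; the iteration count and tolerance are my own guesses, adequate for points well inside a bulb.

```python
def estimate_period(c, n_iter=1000, max_period=8, tol=1e-6):
    # settle onto the attracting cycle, if there is one
    z = c
    for _ in range(n_iter):
        z = z * z + c
    # find the smallest q with |f^q(z) - z| < tol
    w = z
    for q in range(1, max_period + 1):
        w = w * w + c
        if abs(w - z) < tol:
            return q
    return None

# estimate_period(0.05 + 0.3j) == 1 and estimate_period(-1 + 0.2j) == 2
```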


The values of c for which the iterations converge to a cycle of period q form “bulbs” tangent to the cardioid. There are φ(q) such bulbs, where φ is Euler’s totient function, i.e. φ(q) is the number of positive integers less than q and relatively prime to q. The orange disk is the first bulb, the only one of its kind because φ(2) = 1. There are two bulbs with period 3 because φ(3) = 2.

For every p relatively prime to q there is a bulb tangent to the cardioid at w(1 − w) where

w = ½ exp(2πip/q)

and the diameter of the bulb is approximately 1/q.
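For example, taking p/q = 1/2 in the formula above recovers the point where the orange period-2 disk touches the cardioid:

```python
import cmath

def tangent_point(p, q):
    # point where the p/q bulb is tangent to the main cardioid
    w = 0.5 * cmath.exp(2j * cmath.pi * p / q)
    return w * (1 - w)

# tangent_point(1, 2) is -3/4, on the boundary of the disk |1 + c| < 1/4
```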

These bulbs don’t account for everything in the Mandelbrot set. There’s a lot more to explore.

By the way, here is a plot of the Mandelbrot set created by selecting values of c at random and testing whether iterations of z² + c remain bounded.

Minimalist Mandelbrot set

The Mandelbrot set is one of the most famous fractals. It consists of the complex numbers c such that iterations of

f(z) = z² + c

are bounded. The plot of the Mandelbrot set is a complicated image—it’s a fractal, after all—and yet there’s a simple description of a first approximation to the Mandelbrot set.

As shown in [1], the image of the disk

{α : |α| < ½}

under the map taking z to z − z² gives the set of all points where iterations of f converge to a fixed point. This is the blue cardioid region. Call it A.

Also shown in [1] is that the points

B = {c : |1 + c| < ¼}

are the ones such that iterations of f(f(z)) converge to a fixed point, i.e. iterations of f converge to a cycle of period 2.

These two parts form the core of the Mandelbrot set. The blue heart-shaped region on the right is A and the orange disk on the left is B.

The rest of the Mandelbrot set consists of the points where iterations of f remain bounded but have more complicated behavior than the points in A or B.
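Membership in the core can be tested directly. Solving c = α − α² for α gives α = (1 − √(1 − 4c))/2, and the principal branch of the square root picks out the preimage with |α| < ½ whenever c is in the cardioid; the disk test is just |1 + c| < ¼. A sketch:

```python
import cmath

def in_core(c):
    # alpha = (1 - sqrt(1 - 4c))/2 inverts c = alpha - alpha^2; the principal
    # square root yields the preimage with |alpha| < 1/2 when c is in A
    alpha = (1 - cmath.sqrt(1 - 4 * c)) / 2
    in_A = abs(alpha) < 0.5      # cardioid
    in_B = abs(1 + c) < 0.25     # period-2 disk
    return in_A or in_B
```

For instance, in_core(0) and in_core(−1) are True, while in_core(1j) is False even though i belongs to the Mandelbrot set.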

[1] Alan F. Beardon. Iteration of Rational Functions. Springer-Verlag, 1991.