Bounding the perimeter of a triangle between circles

Suppose you have a triangle and you know the size of the largest circle that can fit inside (the incircle) and the size of the smallest circle that can fit outside (the circumcircle). How would you estimate the perimeter of the triangle?

In terms of the figure below, if you know the circumference of the red and blue circles, how could you estimate the perimeter of the black triangle?

Triangle with inscribed and circumscribed circles

A crude estimate is that the triangle perimeter must be greater than the incircle circumference and less than the circumcircle circumference. But we can do better.

Gerretsen’s inequalities

It is conventional in this kind of problem to work with the semiperimeter of the triangle, half the perimeter, rather than the perimeter. Let r be the radius of the incircle, R the radius of the circumcircle, and s the semiperimeter of the triangle. Then Gerretsen’s inequalities say

16Rr − 5r² ≤ s² ≤ 4R² + 4Rr + 3r²

In the figure above,

r = 1.058, R = 2.5074, s = 6.1427

and Gerretsen’s inequalities give

36.8532  ≤ s² = 37.7327 ≤ 39.1200.
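A short script confirms the bounds, using the rounded values of r, R, and s above (the last digits differ slightly from the figures quoted because of the rounding):

```python
# Check Gerretsen's inequalities: 16Rr - 5r^2 <= s^2 <= 4R^2 + 4Rr + 3r^2
r = 1.058   # inradius
R = 2.5074  # circumradius
s = 6.1427  # semiperimeter

lower = 16*R*r - 5*r**2
upper = 4*R**2 + 4*R*r + 3*r**2
assert lower <= s**2 <= upper
print(lower, s**2, upper)  # roughly 36.85 <= 37.73 <= 39.12
```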

In the case of an equilateral triangle, Gerretsen’s inequalities are in fact equations.

Note that Gerretsen’s inequalities say nothing about the centers of the circles. The incircle must lie inside the circumcircle, but triangles whose circle centers sit in quite different relative positions satisfy the same bounds.

Kooi’s inequality

Kooi’s inequality [2] gives another upper bound for the perimeter of the triangle:

s² ≤ R(4R + r)² / (2(2R − r))

which in the example above gives a tighter upper bound, 38.9515.
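Continuing the numerical example, with the same r, R, and s as above:

```python
# Kooi's upper bound: s^2 <= R(4R + r)^2 / (2(2R - r))
r, R, s = 1.058, 2.5074, 6.1427

kooi = R * (4*R + r)**2 / (2 * (2*R - r))
assert s**2 <= kooi
print(kooi)  # about 38.95, tighter than Gerretsen's upper bound of 39.12
```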

Kooi’s upper bound is uniformly better than the upper bound in Gerretsen’s inequalities. But further refinements are possible [3].

Related posts

[1] J. C. H. Gerretsen, Ongelijkheden in de driehoek. Nieuw Tijdschr 41 (1953), 1–7.

[2] O. Kooi, Inequalities for the triangle, Simon Stevin, 32 (1958), 97–101.

[3] Martin Lukarevski and Dan Stefan Marinescu. A refinement of the Kooi’s inequality, Mittenpunkt and applications. Journal of Mathematical Inequalities, Volume 13, Number 3 (2019), 827–832.

Music of the spheres

The idea of “music of the spheres” dates back to the Pythagoreans. They saw an analogy between orbital frequency ratios and musical frequency ratios.

HD 110067 is a star 105 light years away that has six known planets in orbital resonance. The orbital frequencies of the planets are related to each other by small integer ratios.

The planets, starting from the star, are labeled b, c, d, e, f, and g. In 9 “years”, from the perspective of g, the planets complete 54, 36, 24, 16, 12, and 9 orbits respectively. So the ratio of orbital frequencies between each pair of consecutive planets are either 3:2 or 4:3. In musical terms, these ratios are fifths and fourths.
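The ratio arithmetic is simple enough to check in a few lines:

```python
from fractions import Fraction

# Orbits completed by planets b, c, d, e, f, g over the same span of time.
orbits = [54, 36, 24, 16, 12, 9]

# Orbital frequency ratios between consecutive planets.
ratios = [Fraction(inner, outer) for inner, outer in zip(orbits, orbits[1:])]
print(ratios)  # three ratios of 3/2 followed by two ratios of 4/3
```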

In the chord below, the musical frequency ratios are the same as the orbital frequency ratios in the HD 110067 system.

Here’s what the chord sounds like on a piano:


Related posts

The Real Book

I listened to the 99% Invisible podcast about The Real Book this morning and thought back to my first copy.

My first year in college I had a jazz class, and I needed to get a copy of The Real Book, a book of sheet music for jazz standards. The book was illegal at the time, but there was no legal alternative, and I had no scruples about copyright back then.

When a legal version came out later I replaced my original book with the one in the photo below.

The New Real Book Legal

The podcast refers to “When Hal Leonard finally published the legal version of the Real Book in 2004 …” but my book says “Copyright 1988 Sher Music Co.” Maybe Hal Leonard published a version in 2004, but there was a version that came out years earlier.

The podcast also says “Hal Leonard actually hired a copyist to mimic the old Real Book’s iconic script and turn it into a digital font.” But my 1988 version looks not unlike the original. Maybe my version used a kind of typesetting common in jazz, but the Hal Leonard version looks even more like the original handwritten sheet music.

Substack replacing email subscription

The service that sent out my email to blog subscribers stopped working a couple weeks ago, and I’m trying out Substack as a replacement. You can find my Substack account here.

My plan for now is to use this account to make blog post announcements, maybe once a week, with a little introductory commentary for each link. I expect to adjust course in response to feedback. Maybe I’ll write some posts just on Substack, but for now I intend to use it as a place to post blog round-ups.

The Substack is free and I have no intention of ever charging for it. It’s possible I might make some premium add-on in the future, but I doubt it. If I did, the blog post round-up would remain free.


Determinant of an infinite matrix

What does the infinite determinant

D = \left| \begin{array}{lllll} 1 & a_1 & 0 & 0 & \cdots \\ b_1 & 1 & a_2 & 0 & \cdots \\ 0 & b_2 & 1 & a_3 & \\ 0 & 0 & b_3 & 1 & \ddots \\ \vdots & \vdots & & \ddots & \ddots \\ \end{array} \right|

mean and when does it converge?

The determinant D above is the limit of the determinants Dn defined by

D_n = \left| \begin{array}{lllll} 1 & a_1 & & & \\ b_1 & 1 & a_2 & & \\ & b_2 & 1 & \ddots & \\ & & \ddots & \ddots & a_{n-1} \\ & & & b_{n-1} & 1 \\ \end{array} \right|

If all the a’s are 1 and all the b’s are −1 then this post shows that Dn = Fn, the nth Fibonacci number. The Fibonacci numbers obviously don’t converge, so in this case the determinant of the infinite matrix does not converge.
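Here’s a numerical check of the Fibonacci case, assuming numpy. Expanding Dn along its last row gives the recurrence Dn = Dn−1 − an−1bn−1Dn−2, which with all a’s equal to 1 and all b’s equal to −1 is the Fibonacci recurrence.

```python
import numpy as np

def tridiag_det(n, a, b):
    """Determinant of the n x n matrix with 1s on the diagonal,
    a(i) on the superdiagonal, and b(i) on the subdiagonal."""
    M = np.eye(n)
    for i in range(1, n):
        M[i - 1, i] = a(i)
        M[i, i - 1] = b(i)
    return np.linalg.det(M)

# a_i = 1, b_i = -1 gives D_n = D_{n-1} + D_{n-2}: Fibonacci growth.
dets = [round(tridiag_det(n, lambda i: 1, lambda i: -1)) for n in range(1, 9)]
print(dets)  # [1, 2, 3, 5, 8, 13, 21, 34]
```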

In 1895, Helge von Koch said in a letter to Poincaré that the infinite determinant is absolutely convergent if and only if the sum

\sum_{i=1}^\infty a_ib_i

is absolutely convergent. A proof is given in [1].

The proof shows that the Dn are bounded by

\prod_{i=1}^n\left(1 + |a_ib_i| \right)

and so the infinite determinant converges if the corresponding infinite product converges. And a theorem on infinite products says

\prod_{i=1}^\infty\left(1 + |a_ib_i| \right)

converges absolutely if the sum in Koch’s theorem converges. In fact,

\prod_{i=1}^\infty\left(1 + |a_ib_i| \right) \leq \exp\left(\sum_{i=1}^\infty |a_ib_i| \right )

and so we have an upper bound on the infinite determinant.
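For a convergent example, take a_i = b_i = 1/i, so that the sum of a_ib_i = 1/i² converges (to π²/6). The sketch below, again assuming numpy, shows the finite determinants settling down and staying under the exponential bound.

```python
import numpy as np

def D(n):
    """Finite tridiagonal determinant with a_i = b_i = 1/i."""
    M = np.eye(n)
    for i in range(1, n):
        M[i - 1, i] = 1 / i
        M[i, i - 1] = 1 / i
    return np.linalg.det(M)

bound = np.exp(np.pi**2 / 6)   # exp of the convergent sum of |a_i b_i|
d50, d100 = D(50), D(100)
assert abs(d100) <= bound      # within the exponential upper bound
assert abs(d100 - d50) < 0.01  # successive determinants agree ever more closely
print(d50, d100)
```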

Related post: Tridiagonal systems, determinants, and cubic splines

[1] A. A. Shaw. H. von Koch’s First Lemma and Its Generalization. The American Mathematical Monthly, April 1931, Vol. 38, No. 4, pp. 188–194

Area of quadrilateral as a determinant

I’ve written several posts about how determinants come up in geometry. These determinants often look similar, having columns related to coordinates and a column of ones. You can find several examples here along with an explanation for this pattern.

If you have three points z1, z2, and z3 in the complex plane, you can find the area of a triangle with these points as vertices

A(z_1, z_2, z_3) = \frac{i}{4} \, \left| \begin{matrix} z_1 & \bar{z}_1 & 1 \\ z_2 & \bar{z}_2 & 1 \\ z_3 & \bar{z}_3 & 1 \\ \end{matrix} \right|

You can read more about this here.
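A quick sanity check, assuming numpy: the triangle with vertices 0, 1, and i has area 1/2.

```python
import numpy as np

def triangle_area(z1, z2, z3):
    """Area from the complex determinant formula A = (i/4) det[z, conj(z), 1]."""
    M = np.array([[z1, z1.conjugate(), 1],
                  [z2, z2.conjugate(), 1],
                  [z3, z3.conjugate(), 1]], dtype=complex)
    return ((1j / 4) * np.linalg.det(M)).real

print(triangle_area(0 + 0j, 1 + 0j, 1j))  # 0.5
```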

If you add the second column to the first, and subtract the first column from the second, you can get the equation for the area of a triangle in the real plane.

A = \frac{1}{2} \, \left| \begin{matrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \\ \end{matrix} \right|

Presumably the former equation was first derived from the latter, but the two are equivalent.

Now suppose you have a quadrilateral whose vertices are numbered in clockwise or counterclockwise order. Then the area is given by

A = \frac{1}{2} \, \left| \begin{matrix} x_1 & y_1 & 1 & 0\\ x_2 & y_2 & 1 & 1\\ x_3 & y_3 & 1 & 0\\ x_4 & y_4 & 1 & 1\\ \end{matrix} \right|

The proof is easy. If you expand the determinant by minors on the last column, you get the sum of two 3 × 3 determinants corresponding to the areas of the two triangles formed by splitting the quadrilateral along the diagonal connecting the first and third points.
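Here’s a check of the quadrilateral formula on a unit square, assuming numpy:

```python
import numpy as np

# Unit square, vertices in counterclockwise order.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]

# Last column alternates 0, 1, 0, 1, as in the formula above.
M = np.array([[x, y, 1, i % 2] for i, (x, y) in enumerate(pts)], dtype=float)
area = np.linalg.det(M) / 2
print(area)  # 1.0, up to floating point rounding
```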

A very accurate logarithm approximation

The previous post looked at an efficient way to approximate nth roots of fractions near 1 by hand. This post does the same for logarithms.

As before, we assume x = p/q and define

s = p + q
d = p − q

Because we’re interested in values of x near 1, d is small, and small numbers are convenient to work with by hand.

In [1] Kellogg gives the approximation

log x ≈ 3(x² − 1)/((x + 1)² + 2x) = 6ds/(3s² − d²)

So, for example, suppose we wanted to take the natural log of 7/8. Then p = 7, q = 8, s = 15, and d = −1.

log x ≈ (6×15×(−1))/(3×225 − 1) = −90/674 = −45/337.

This approximation is good to six decimal places.
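The calculation is easy to script and compare against the exact value:

```python
from math import log

def log_approx(p, q):
    """Kellogg's approximation to log(p/q), for p/q near 1."""
    s, d = p + q, p - q
    return 6*d*s / (3*s**2 - d**2)

approx = log_approx(7, 8)   # -90/674 = -45/337
exact = log(7/8)
print(approx, exact)        # the two agree to six decimal places
```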

Kellogg claims that

This value of E [the natural logarithm], if q [what I’ve called x] be between .9 and 1.1, is true to the seventh decimal.

He then goes on to explain how to create an even more accurate approximation, and how to deal with larger values of x.

Here’s a plot verifying Kellogg’s claim.

Note that the scale of the plot is 10⁻⁸. As the flat spot in the middle suggests, you get even more decimal places for x closer to 1.

[1] Ansel N. Kellogg. Empirical formulæ for Approximate Computation. The American Mathematical Monthly. February 1897, Vol. 4, No. 2, pp. 39–49.


Handy approximation for roots of fractions

This post will discuss a curious approximation with a curious history.


Let x be a number near 1, written as a fraction

x = p / q.

Then define s and d as the sum and difference of the numerator and denominator.

s = p + q
d = p − q

Since we are assuming x is near 1, d is small relative to s.

We have the following approximation for the nth root of x.

ⁿ√x ≈ (ns + d) / (ns − d).

This comes from a paper written in 1897 [1]. At the time there was great interest in approximations that are easy to carry out by hand, and this formula would have been very convenient.

The approximation assumes x is near 1. If not, you can multiply x by a number whose root is known to bring it near 1. There is an example below.


Positive d

Let’s find the cube root of x = 112/97. We have n = 3, p = 112, q = 97, s = 209, and d = 15. The approximation says

³√x ≈ 642/612 = 107/102 = 1.049019…

while the exact value is 1.049096… .

Negative d

The value of d might be negative, as when x = 31/32. If we want to find the fifth root, n = 5, p = 31, q = 32, s = 63, and d = −1.

⁵√x ≈ 314/316 = 157/158 = 0.9936708…

while the exact value is 0.9936703… .

x not near 1

If x is not near 1, you can make it near 1. For example, suppose you wanted to compute the square root of 3. Since 17² = 289, 300/289 is near 1. You could find the square root of 300/289, then multiply the result by 17/10 to get an approximation to √3.
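The whole procedure is easy to express in code. The sketch below implements the approximation and reproduces the three examples above.

```python
def root_approx(n, p, q):
    """Approximate the nth root of p/q, for p/q near 1."""
    s, d = p + q, p - q
    return (n*s + d) / (n*s - d)

print(root_approx(3, 112, 97))  # 1.049019..., vs exact 1.049096...
print(root_approx(5, 31, 32))   # 0.993670..., vs exact 0.993670...

# sqrt(3): since 289 = 17^2, the fraction 300/289 is near 1, and
# sqrt(300/289) * 17/10 = sqrt(3) exactly.
print(root_approx(2, 300, 289) * 17 / 10)  # 1.732048..., vs sqrt(3) = 1.732051...
```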


The author refers to this approximation as Mercator’s formula, presumably after Gerardus Mercator (1512–1594) [2] of map projection fame. A brief search did not find this formula because Mercator’s projection drowns out Mercator’s formula in search results.

The author says a proof is given in Hutton’s Tracts on Mathematics, Vol 1. I tracked down this reference, and the full title in all its 19th century charm is

Tracts on Mathematical and Philosophical Subjects … by Charles Hutton, Late Professor of Mathematics in the Royal Military Academy, Woolwich.

Hutton’s book looks interesting. Besides bridges and gunpowder, the book has a lot to say about what we’d now call numerical analysis, such as ways to accelerate the convergence of series. Hutton’s version of the formula above does not require that x be near 1.

Related posts

[1] Ansel N. Kellogg. Empirical formulæ for Approximate Computation. The American Mathematical Monthly. February 1897, Vol. 4, No. 2, pp. 39–49.

[2] Mercator’s projection is so familiar that we may not appreciate what a clever man he was. We can derive his projection now using calculus and logarithms, but Mercator developed it before Napier developed logarithms or Newton developed calculus. More on that here.

Uncovering names masked with stars

Sometimes I’ll see things like my name partially concealed as J*** C*** and think “a lot of good that does.”

Masking letters reveals more than people realize. For example, when you see that someone’s first name is four letters and begins with J, there’s about a 70% chance they’re male and a 44% chance they’re named John. If you know this person is male, there’s a 63% chance their name is John.

If you know a man’s name has the form J***, his name isn’t necessarily John, though that’s the most likely possibility. There’s an 8% chance his name is Jack and a 6% chance his name is Joel.

All these numbers depend on the data set you’re looking at, but these are roughly accurate numbers for looking at any representative sample of American names.

Some names stand out more than others. If I tell you someone’s name is E********, there’s a 90% chance the name is Elizabeth.

If I tell you someone’s name is B*****, there’s a 77% chance this person is female, but it’s harder to guess which name is hers. The most likely possibility is Brenda, but there are several other possibilities that are fairly likely: Bonnie, Brooke, Brandy, etc.
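This kind of inference is just conditional probability over a name-frequency table. The sketch below uses a tiny table of made-up counts (illustrative placeholders, not real name-frequency data) to show how a mask pattern narrows the candidates.

```python
import re
from collections import Counter

# Illustrative placeholder counts -- real estimates would come from
# census or similar name-frequency data.
name_counts = Counter({
    "John": 3400, "Jack": 620, "Joel": 470, "Jose": 410, "Juan": 400,
    "Mary": 2900, "James": 3300, "Anna": 800,
})

def mask_posterior(mask):
    """Probability of each name consistent with a mask like 'J***'."""
    pattern = re.compile(mask.replace("*", "[a-z]") + "$")
    matches = {n: c for n, c in name_counts.items() if pattern.match(n)}
    total = sum(matches.values())
    return {n: c / total for n, c in matches.items()}

probs = mask_posterior("J***")
print(max(probs, key=probs.get))  # John -- the dominant name in this toy table
```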

We could go through a similar exercise with last names. You can probably guess who S**** is, though C***** is not so clear.

In short, replacing letters with stars doesn’t do much to conceal someone’s name. It usually doesn’t let you infer someone’s name with certainty, but it definitely improves your chances of guessing correctly. If you have a few good guesses as to someone’s name, and some good guesses on a handful of other attributes, together you have a good chance of identifying someone.

Related posts

Almost ASCII

I was working recently with a gigabyte file that had a dozen non-ASCII characters. This is very common. The ASCII character set is not quite big enough for a lot of tasks. Of course it’s completely inadequate if you’re writing Japanese, but it’s almost enough for documents written in English and a few other languages.

Efficient encoding

The world has standardized on Unicode as the way to represent characters across languages. Unicode currently has around 150,000 characters, far more than ASCII’s 128 characters.

But there’s a problem. Since 150,000 > 2¹⁷, it takes more than two bytes (eight bits to a byte) to represent each of 150,000 things. If you use three bytes to represent each character, every file that is almost all ASCII will get three times bigger. If you limit yourself to the most frequently used Unicode characters, those that can be represented with two bytes (the “basic multilingual plane”), then you still double the size of files.

Enter UTF-8, a brilliant solution to this problem. The UTF-8 encoding of an ASCII file is an ASCII file. Pure ASCII files don’t get any larger when interpreted as UTF-8 encoded Unicode. Because 128 = 2⁷, a byte representing an ASCII character has one unused bit. UTF-8 uses this unused bit to signal that what follows is not ASCII. I wrote about the full details here.

Unicode characters outside the ASCII range take 2, 3, or 4 bytes to represent. Inserting a small number of non-ASCII characters into a UTF-8 encoded Unicode file hardly changes the file’s size.
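You can see the variable-length encoding directly in Python:

```python
# ASCII text is unchanged under UTF-8: one byte per character.
assert "hello".encode("utf-8") == b"hello"

# Non-ASCII characters take 2, 3, or 4 bytes.
assert len("é".encode("utf-8")) == 2   # Latin-1 supplement
assert len("€".encode("utf-8")) == 3   # basic multilingual plane
assert len("𝄞".encode("utf-8")) == 4   # beyond the BMP

# A large ASCII file with a few non-ASCII characters barely grows.
text = "x" * 1_000_000 + "é"
assert len(text.encode("utf-8")) == 1_000_002
```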


I mentioned at the top that I had a gigabyte file with a dozen non-ASCII characters. The command file -I reported the file encoding to be ASCII, because the vast majority of the file was ASCII. But the non-ASCII characters were not valid Unicode characters either.

These invalid Unicode characters would display as �, which is not actually in the file. The � is a valid Unicode character for representing an invalid Unicode character.

Some of the non-ASCII characters were extended ASCII (Windows 1252) characters, but if I remember correctly even that didn’t account for everything. Some of the odd characters were simply file corruption.

It’s kinda interesting how some tools are robust to these kinds of glitches and some are not. My first clue that something funny was going on was when sort refused to sort. I ran a Python script that helps me fix wonky text files and it threw an error:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x92 in position 222662: invalid start byte

This may seem like gibberish, but it actually says exactly what’s going on. There was an error interpreting the file as Unicode, because 0x92 is not a valid way to start a non-ASCII character in UTF-8.

The first bit of an ASCII character is 0. The first two bits of a non-ASCII character in UTF-8 are 11. But 9 is 1001 in binary, i.e. it starts with 10, and so the byte 0x92 is neither an ASCII character nor the beginning of a UTF-8 non-ASCII sequence of bytes. More details here.
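A couple of lines of Python confirm the diagnosis: 0x92 has the bit pattern of a UTF-8 continuation byte, which can’t begin a character.

```python
# 0x92 = 10010010 in binary: the leading '10' marks a continuation byte,
# valid only after a multibyte start byte (110....., 1110...., or 11110...).
assert format(0x92, "08b") == "10010010"

try:
    b"\x92".decode("utf-8")
except UnicodeDecodeError as err:
    print(err.reason)  # invalid start byte
```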

Removing non-ASCII characters

For my application I could just remove the invalid characters using iconv with the -c option.

iconv -c -f CP1252 -t UTF-8 inputfile > outputfile

If you need to salvage troublesome characters then things are a little more complicated. The iconv utility will work if you know what the intended encoding was. If you don’t know the intended encoding, you may need to do some detective work.
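If the bad bytes really are Windows-1252, the same salvage can be done with the codecs in Python’s standard library; this is a sketch, not a drop-in replacement for iconv.

```python
raw = b"it\x92s"  # invalid as UTF-8, since 0x92 can't start a character

# Decode with the intended encoding: 0x92 is a right single quote in CP1252.
fixed = raw.decode("cp1252")
assert fixed == "it\u2019s"

# Or, like iconv -c, silently drop bytes that don't decode as UTF-8.
dropped = raw.decode("utf-8", errors="ignore")
assert dropped == "its"
```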

Related posts