Incircle and excircles

An earlier post looked at the nine-point circle of a triangle, a circle passing through nine special points associated with the triangle. Feuerbach’s theorem says that the nine-point circle of a triangle is tangent to the incircle and the three excircles of the same triangle.

The incircle of a triangle is the largest circle that can fit inside the triangle. When we add the incircle to the illustration from the post on the nine-point circle, it’s kinda hard to see the difference between the two circles. The nine-point circle is drawn in solid black and the incircle is drawn in dashed green.

If we extend the sides of the triangle, an excircle is a circle tangent to one side of the original triangle and to the extensions of the other two sides.
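Both circles are easy to compute directly from vertex coordinates. Here’s a sketch (the sample triangle is an arbitrary choice): the incenter is the side-length-weighted average of the vertices, and the excenter opposite A uses the same formula with the sign of a flipped.

```python
import math

# Arbitrary sample triangle
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

# Side lengths opposite each vertex
a = math.dist(B, C)
b = math.dist(C, A)
c = math.dist(A, B)

s = (a + b + c) / 2                            # semiperimeter
area = math.sqrt(s*(s - a)*(s - b)*(s - c))    # Heron's formula

# Incircle: center is the weighted average of the vertices,
# weighted by the opposite side lengths; radius is area/s.
r_in = area / s
incenter = ((a*A[0] + b*B[0] + c*C[0]) / (a + b + c),
            (a*A[1] + b*B[1] + c*C[1]) / (a + b + c))

# Excircle opposite A: same formula with the sign of a flipped.
r_A = area / (s - a)
excenter_A = ((-a*A[0] + b*B[0] + c*C[0]) / (-a + b + c),
              (-a*A[1] + b*B[1] + c*C[1]) / (-a + b + c))

def dist_to_line(p, q1, q2):
    """Distance from point p to the (infinite) line through q1 and q2."""
    (x0, y0), (x1, y1), (x2, y2) = p, q1, q2
    return abs((x2 - x1)*(y1 - y0) - (x1 - x0)*(y2 - y1)) / math.hypot(x2 - x1, y2 - y1)
```

The tangency conditions make a convenient check: the incenter is at distance r_in from all three side lines, and the excenter opposite A is at distance r_A from line BC and from the lines extending the other two sides.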


Artemis lunar orbit

I haven’t been able to find technical details of the orbit of Artemis I, and some of what I’ve found has been contradictory, but here are some back-of-the-envelope calculations based on what I’ve pieced together. If someone sends me better information I can update this post.

Artemis is in a highly eccentric orbit around the moon, coming within 130 km (80 miles) of the moon’s surface at closest pass, and this orbit will take 14 days to complete. The weak link in this data is “14 days.” Surely this number has been rounded for public consumption.

If we assume Artemis is in a Keplerian orbit, i.e. we can ignore the effect of the Earth, then we can calculate the shape of the orbit using the information above. This assumption is questionable because as I understand it the reason for such an eccentric orbit has something to do with Lagrange points, which means the Earth’s gravity matters. Still, I imagine the effect of Earth’s gravity is a smaller source of error than the lack of accuracy in knowing the period.

Solving for axes

Artemis is orbiting the moon similarly to how the Mars Orbiter Mission orbited Mars. We can use Kepler’s third law, relating the period T to the semi-major axis a of the orbit, to solve for a.

T = 2π √(a³/μ)

Here μ = GM, with G being the gravitational constant and M being the mass of the moon. Now

G = 6.674 × 10⁻¹¹ N m²/kg²

and

M = 7.3459 × 10²² kg.

If we assume T is 14 × 24 × 3600 seconds, then we get

a = 56,640 km

or 35,200 miles. The value of a is rough since the value of T is rough.

Assuming a Keplerian orbit, the moon is at one focus of the orbit, located a distance c from the center of the ellipse. If Artemis is 130 km from the surface of the moon at perilune, and the radius of the moon is 1737 km, then

c = a – (130 + 1737) km = 54,770 km

or 34,000 miles. The semi-minor axis b satisfies

b² = a² – c²

and so

b = 14,422 km

or 8962 miles.

Orbit shape

The eccentricity is c/a = 0.967. As I’ve written about before, eccentricity is hard to interpret intuitively. Aspect ratio is much easier to imagine than eccentricity, and the relation between the two is highly nonlinear.
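Putting the calculations above into a short script (assuming the period is exactly 14 days):

```python
import math

G = 6.674e-11        # gravitational constant, N m^2 / kg^2
M = 7.3459e22        # mass of the moon, kg
mu = G * M

T = 14 * 24 * 3600   # assumed orbital period, seconds

# Kepler's third law, T = 2 pi sqrt(a^3/mu), solved for the semi-major axis
a = (mu * (T / (2 * math.pi))**2) ** (1/3)   # meters

# The moon sits at one focus, a distance c from the center of the ellipse
perilune = 130e3         # closest approach above the surface, m
moon_radius = 1737e3     # m
c = a - (perilune + moon_radius)

b = math.sqrt(a**2 - c**2)   # semi-minor axis
e = c / a                    # eccentricity

print(round(a/1000), round(c/1000), round(b/1000), round(e, 3))
```

The printed values reproduce the figures in the text: a ≈ 56,640 km, c ≈ 54,770 km, b ≈ 14,422 km, and e ≈ 0.967, with an aspect ratio b/a of about 1/4.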

Assuming everything above, here’s what the orbit would look like. The distances on the axes are in kilometers.

Artemis moon orbit

The orbit is highly eccentric: the center of the orbit is far from the foci of the orbit. But the aspect ratio is about 1/4. The orbit is only about 4 times wider in one direction than the other. It’s obviously an ellipse, but it’s not an extremely thin ellipse.

Lagrange points

In an earlier post I showed how to compute the Lagrange points for the Sun-Earth system. We can use the same equations for the Earth-Moon system.

The equations for the distance r from the Lagrange points L1 and L2 to the moon are

\frac{M_1}{(R\pm r)^2} \pm \frac{M_2}{r^2}=\left(\frac{M_1}{M_1+M_2}R \pm r\right)\frac{M_1+M_2}{R^3}

The equation for L1 corresponds to taking ± as – and the equation for L2 corresponds to taking ± as +. Here M1 and M2 are the masses of the Earth and Moon respectively, and R is the distance between the two bodies.

If we modify the code from the earlier post on Lagrange points we get

L1 = 54784 km
L2 = 60917 km

where L1 is on the near side of the moon and L2 on the far side. We estimated the semi-major axis a to be 56,640 km. This is about 3% larger than the distance from the moon to L1. So the orbit of Artemis passes near or through L1. This assumes the axis of the Artemis orbit is aligned with a line from the moon to Earth, which I believe is at least approximately correct.
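The earlier post’s code isn’t reproduced here, but a minimal bisection sketch of the same two equations looks like this. The masses are standard values; R = 384,400 km (the mean Earth–Moon distance) is an assumption on my part, and since L1 and L2 scale roughly with R, a different choice of R shifts the results by a few percent.

```python
M1 = 5.972e24        # mass of the Earth, kg
M2 = 7.3459e22       # mass of the moon, kg
R = 3.844e8          # assumed Earth-moon distance, m (mean value)

k = (M1 + M2) / R**3
bary = M1 / (M1 + M2) * R    # distance from the barycenter to the moon

def f(r, sign):
    """L1/L2 equation; sign = -1 gives L1, +1 gives L2."""
    return M1 / (R + sign*r)**2 + sign * M2 / r**2 - (bary + sign*r) * k

def bisect(fun, lo, hi, tol=1.0):
    """Simple bisection; fun must change sign on [lo, hi]."""
    flo = fun(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if flo * fun(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, fun(mid)
    return (lo + hi) / 2

L1 = bisect(lambda r: f(r, -1), 1e7, 1e8)   # meters from the moon
L2 = bisect(lambda r: f(r, +1), 1e7, 1e8)
print(L1 / 1000, L2 / 1000)                 # km
```

With the mean distance plugged in for R the roots come out somewhat larger than the figures quoted above, which is consistent with those figures having used a smaller value of R.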

The nine-point circle theorem

The nine-point circle theorem says that for any triangle, there is a circle passing through the following nine points:

  • The midpoints of each side.
  • The foot of the altitude to each side.
  • The midpoint between each vertex and the orthocenter.

The orthocenter is the place where the three altitudes intersect.

Illustration of Feuerbach's nine-point circle theorem

In the image above, the midpoints are red circles, the altitudes are blue lines, the feet are blue stars, and the midpoints between the vertices and the orthocenter are green squares.
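The theorem is easy to check numerically. Here’s a sketch that computes all nine points for an arbitrary sample triangle and verifies they lie on one circle, using the standard fact that the orthocenter satisfies H = A + B + C − 2O, where O is the circumcenter.

```python
import math

def circumcenter(p, q, r):
    """Center of the circle through three points."""
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    d = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy)

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def foot(p, a, b):
    """Foot of the perpendicular from p to the line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0])*dx + (p[1] - a[1])*dy) / (dx*dx + dy*dy)
    return (a[0] + t*dx, a[1] + t*dy)

A, B, C = (0.0, 0.0), (5.0, 0.0), (1.0, 4.0)   # arbitrary triangle

O = circumcenter(A, B, C)
H = (A[0] + B[0] + C[0] - 2*O[0],               # orthocenter: H = A + B + C - 2O
     A[1] + B[1] + C[1] - 2*O[1])

nine = [midpoint(A, B), midpoint(B, C), midpoint(C, A),   # side midpoints
        foot(A, B, C), foot(B, C, A), foot(C, A, B),      # feet of altitudes
        midpoint(A, H), midpoint(B, H), midpoint(C, H)]   # vertex-orthocenter midpoints

# The circle through the three side midpoints should contain all nine points
N = circumcenter(*nine[:3])
radii = [math.dist(N, p) for p in nine]
```

As a bonus check, the nine-point center is the midpoint of O and H, and the nine-point radius is half the circumradius.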


The Möbius Inverse Monoid

I’ve written about Möbius transformations many times because they’re simple functions that nevertheless have interesting properties.

A Möbius transformation is a function f : ℂ → ℂ of the form

f(z) = (az + b)/(cz + d)

where ad − bc ≠ 0. One of the basic properties of Möbius transformations is that they form a group. Except that’s not quite right if you want to be completely rigorous.

The problem is that a Möbius transformation isn’t a map from (all of) ℂ to ℂ unless c = 0 (which implies d cannot be 0). The usual way to fix this is to add a point at infinity, which makes things much simpler. Now we can say that the Möbius transformations form a group of automorphisms on the Riemann sphere S².

But if you insist on working in the finite complex plane, i.e. the complex plane ℂ with no point at infinity added, each Möbius transformation is actually a partial function on ℂ because a point may be missing from the domain. As detailed in [1], you technically do not have a group but rather an inverse monoid. (See the previous post on using inverse semigroups to think about floating point partial functions.)

You can make Möbius transformations into a group by defining the product of the Möbius transformation f above with

g(z) = (Az + B) / (Cz + D)

to be

(aAz + bCz + aB + bD) / (Acz + Cdz + Bc + dD),

which is what you’d get if you computed the composition fg as functions, ignoring any difficulties with domains.
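This product is exactly multiplication of the 2 × 2 coefficient matrices, which makes it easy to check numerically (the sample coefficients below are arbitrary choices with nonzero determinant):

```python
def mobius(a, b, c, d):
    """Mobius transformation z -> (az + b)/(cz + d), a partial function on C."""
    assert a*d - b*c != 0
    return lambda z: (a*z + b) / (c*z + d)

def compose(f, g):
    """Coefficients of f o g: the product of the 2x2 coefficient matrices."""
    (a, b, c, d), (A, B, C, D) = f, g
    return (a*A + b*C, a*B + b*D, c*A + d*C, c*B + d*D)

f_co = (1, 2, 3, 4)   # arbitrary coefficients, ad - bc = -2
g_co = (2, 0, 1, 1)   # arbitrary coefficients, AD - BC = 2

h = mobius(*compose(f_co, g_co))

# Compare against direct function composition at a point where
# no denominator vanishes
z = 0.5 + 0.25j
direct = mobius(*f_co)(mobius(*g_co)(z))
```

At the pole (when c ≠ 0) the lambda simply fails, which is the partial-function behavior discussed above; the matrix product sidesteps the domain question entirely.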

The Möbius inverse monoid is surprisingly complex. Things are simpler if you compactify the complex plane by adding a point at infinity, or if you gloss over the fine points of function domains.

Related posts

[1] Mark V. Lawson. The Möbius Inverse Monoid. Journal of Algebra. 200, 428–438 (1998).

Solving Laplace’s equation on a disk

Why care about solving Laplace’s equation

\Delta u = 0

on a disk?

Laplace’s equation is important in its own right—for example, it’s important in electrostatics—and understanding Laplace’s equation is a stepping stone to understanding many other PDEs.

Why care specifically about a disk? An obvious reason is that you might need to solve Laplace’s equation on a disk! But there are two less obvious reasons.

First, a disk can be mapped conformally to any simply connected proper open subset of the complex plane. And because conformal equivalence is transitive, two regions conformally equivalent to the disk are conformally equivalent to each other. For example, as I wrote about here, you can map a Mickey Mouse silhouette

Mickey Mouse

to and from the Batman logo

Batman logo

using conformal maps. In practice, you’d probably map Mickey Mouse to a disk, and compose that map with a map from the disk to Batman. The disk is a standard region, and so there are catalogs of conformal maps between the disk and other regions. And there are algorithms for computing maps between a standard region, such as the disk or half plane, and more general regions. You might be able to look up a mapping from the disk to Mickey, but probably not to Batman.

In short, the disk is sort of the hub in a hub-and-spoke network of cataloged maps and algorithms.

Secondly, Laplace’s equation has an analytical solution on the disk. You can just write down the solution, and we will shortly. If it were easy to write down the solution on a triangle, that might be the hub, but instead it’s a disk.

Suppose u is a real-valued continuous function on the boundary of the unit disk. Then u can be extended to a harmonic function, i.e. a solution to Laplace’s equation on the interior of the disk, via the Poisson integral formula:

u(z) = \frac{1}{2\pi} \int_0^{2\pi} u(e^{i\theta})\, \text{Re}\left( \frac{e^{i\theta} + z}{e^{i\theta} - z}\right)\, d\theta

Or in terms of polar coordinates:

u(re^{i\varphi}) = \frac{1}{2\pi} \int_0^{2\pi} \frac{u(e^{i\theta}) (1 - r^2)}{1 - 2r\cos(\theta-\varphi) + r^2}\, d\theta
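The polar form is easy to evaluate numerically; the trapezoid rule converges very quickly for smooth periodic integrands. Here’s a sketch, checked against boundary data cos θ, whose harmonic extension is Re(z) = r cos φ:

```python
import math

def poisson_disk(u_boundary, r, phi, n=512):
    """Evaluate the Poisson integral at the point r*exp(i*phi), 0 <= r < 1.

    u_boundary(theta) gives the boundary values u(exp(i*theta)).
    """
    total = 0.0
    for k in range(n):
        theta = 2 * math.pi * k / n
        kernel = (1 - r*r) / (1 - 2*r*math.cos(theta - phi) + r*r)
        total += u_boundary(theta) * kernel
    # the 1/(2 pi) factor and d theta = 2 pi/n combine to 1/n
    return total / n

# Boundary data cos(theta) extends to the harmonic function Re(z) = r cos(phi)
val = poisson_disk(math.cos, 0.5, 1.0)
```

For this smooth boundary data the result agrees with the exact harmonic extension to essentially machine precision.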

Related posts

Posts on ellipses and elliptic integrals

I wrote a lot of posts on ellipses and related topics over the last couple months. Here’s a recap of the posts, organized into categories.

Basic geometry

More advanced geometry

Analysis

Design of experiments and design theory

Design of experiments is a branch of statistics, and design theory is a branch of combinatorics, and yet they overlap quite a bit.

It’s hard to say precisely what design theory is, but it’s concerned with whether objects can be arranged in certain ways, and if so, how many ways this can be done. Design theory is pure mathematics, but it is of interest to people working in areas of applied mathematics such as coding theory and statistics.

Here’s a recap of posts I’ve written recently related to design of experiments and design theory.

Design of Experiments

A few weeks ago I wrote about fractional factorial design. Then later I wrote about response surface models. Then a diagram from central composite design, a popular design in response surface methodology, was one of the diagrams in a post I wrote about visually similar diagrams from separate areas of application.

I wrote two posts about pitfalls with A/B testing. One shows how play-the-winner sequential testing has the same problems as Condorcet’s voter paradox, with the order of the tests potentially determining the final winner. More seriously, A/B testing cannot detect interaction effects which may be critical.

ANSI and Military Standards

There are several civilian and military standards related to design of experiments. The first of these was MIL-STD-105. The US military has retired this standard in favor of the civilian standard ASQ/ANSI Z1.4 which is virtually identical.

Similarly, the US military standard MIL-STD-414 was replaced by the very similar civilian standard ASQ/ANSI Z1.9. This post looks at the mean-range method for estimating variation which these two standards reference.

Design Theory

I wrote a couple posts on Room squares, one on Room squares in general and one on Thomas Room’s original design now known as a Room square. Room squares are used in tournament designs.

I wrote a couple posts about Costas arrays, an introduction and a post on creating Costas arrays in Mathematica.

Latin Squares

Latin squares and Greco-Latin squares are part of design theory and part of design of experiments. Here are several posts on Latin and Greco-Latin squares.

Repunits: primes and passwords

A repunit is a number whose base 10 representation consists entirely of 1s. The number consisting of n 1s is denoted Rn.

Repunit primes

A repunit prime is, unsurprisingly, a repunit number which is prime. The most obvious example is R2 = 11. Until recently the repunit numbers confirmed to be prime were Rn for n = 2, 19, 23, 317, 1031. Now the case for n = 49081 has been confirmed.

R_{49081} = \frac{10^{49081} - 1}{9} = \underbrace{111\ldots1}_{\text{49,081 ones}}

Here is the announcement. The date posted at the top of the page is from March this year, but I believe the announcement is new. Maybe the author edited an old page and didn’t update the shown date.
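The smaller repunit primes are easy to verify. Here’s a sketch using a deterministic Miller–Rabin test; with these witness bases the test is provably correct for numbers below roughly 3 × 10²³, which comfortably covers R23 but not the larger repunit primes, which need far heavier machinery.

```python
def repunit(n):
    """R_n = (10**n - 1)//9, the number consisting of n ones."""
    return (10**n - 1) // 9

# The first 12 primes as witnesses make Miller-Rabin deterministic
# for n below about 3e23.
WITNESSES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime(n):
    if n < 2:
        return False
    for p in WITNESSES:
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in WITNESSES:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

print([n for n in range(2, 25) if is_prime(repunit(n))])  # [2, 19, 23]
```

This reproduces the beginning of the list above: among small n, only R2, R19, and R23 are prime.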

Repunit passwords

Incidentally, I noticed a lot of repunits when I wrote about bad passwords a few days ago. That post explored a list of commonly used but broken passwords. This is the list of passwords that password cracking software will try first. The numbers Rn are part of the list for the following values of n:

1–45, 47–49, 51, 53–54, 57–60, 62, 67, 70, 72, 77, 82, 84, 147

So 46 is the smallest value of n such that Rn is not on the list. I would not recommend using R46 as a password, but surprisingly there are a lot of worse choices.
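As a quick sanity check on that claim, here’s a sketch with the list of n values above transcribed as ranges:

```python
# Values of n for which R_n appears on the bad password list,
# transcribed from the list above
ranges = [(1, 45), (47, 49), (51, 51), (53, 54), (57, 60), (62, 62),
          (67, 67), (70, 70), (72, 72), (77, 77), (82, 82), (84, 84),
          (147, 147)]

present = {n for lo, hi in ranges for n in range(lo, hi + 1)}
smallest_missing = min(n for n in range(1, 200) if n not in present)
print(smallest_missing)  # 46
```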

The bad password file is sorted in terms of popularity, and you might expect repunits to appear in the file in order, i.e. shorter sequences first. That is sorta true overall. But you can see streaks in the plot below showing multiple runs where longer passwords are more common than shorter passwords.

Recent posts on solving equations

I’ve written several blog posts about equation solving recently. This post will summarize how in hindsight they fit together.

Trig equations

How to solve trig equations in general, and specifically how to solve equations involving quadratic polynomials in sine and cosine.

Polynomial equations

This weekend I wrote about a change of variables to “depress” a cubic equation, eliminating the quadratic term. This is a key step in solving a cubic equation. The idea can be extended to higher degree polynomials, and applied to differential equations.

Before that I wrote about how to tell whether a cubic or quartic equation has a double root. That post is also an introduction to resultants.

Numerically solving equations

First of all, there was a post on solving Kepler’s equation with Newton’s method, and especially with John Machin’s clever starting point.

Another post, also solving Kepler’s equation, showing how Newton’s method can be good, bad, or ugly.

And out there by itself, Weierstrass’ method for simultaneously searching for all roots of a polynomial.

Eliminating terms from higher-order differential equations

This post ties together two earlier posts: the previous post on a change of variable to remove a term from a polynomial, and an older post on a change of variable to remove a term from a differential equation. These are different applications of the same idea.

A linear differential equation can be viewed as a polynomial in the differential operator D applied to the function we’re solving for. More on this idea here. So it makes sense that a technique analogous to the technique used for “depressing” a polynomial could work similarly for differential equations.

In the differential equation post mentioned above, we started with the equation

y'' + p(x) y' + q(x) y = 0

and reduced it to

u'' + r(x) u = 0

using the change of variable

u(x) = \exp\left( \frac{1}{2} \int^x p(t)\,dt\right ) y(x)

So where did this change of variables come from? How might we generalize it to higher-order differential equations?

In the post on depressing a polynomial, we started with a polynomial

p(x) = ax^n + bx^{n-1} + cx^{n-2} + \cdots

and used the change of variables

x = t - \frac{b}{na}

to eliminate the xⁿ⁻¹ term. Let’s do something analogous for differential equations.
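First, a quick concrete check of the polynomial shift. The sketch below expands p(t − b/(na)) via the binomial theorem, using exact rational arithmetic; the example cubic is an arbitrary choice.

```python
from fractions import Fraction
from math import comb

def depress(coeffs):
    """Substitute x = t - b/(n a) to kill the second-highest coefficient.

    coeffs lists [a_n, ..., a_0], highest degree first; returns the
    coefficients of the shifted polynomial in the same convention.
    """
    n = len(coeffs) - 1
    h = Fraction(coeffs[1], n * coeffs[0])    # the shift b/(n a)
    new = [Fraction(0)] * (n + 1)
    for k, a_k in enumerate(coeffs):          # a_k multiplies x^(n-k)
        deg = n - k
        for j in range(deg + 1):
            # (t - h)^deg contributes comb(deg, j) * (-h)^(deg-j) * t^j
            new[n - j] += a_k * comb(deg, j) * (-h) ** (deg - j)
    return new

print(depress([1, 6, 11, 6]))   # x^3 + 6x^2 + 11x + 6  ->  t^3 - t
```

The shifted cubic has coefficients (1, 0, −1, 0): the quadratic term is gone, as promised.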

Let P be an nth degree polynomial and consider the differential equation

P(D) y = 0

We can turn this into a differential equation

Q(D) u = 0

where the polynomial

Q(D) = P\left(D - \frac{p}{n}\right)

has no term involving Dⁿ⁻¹ by solving

\left(D - \frac{p}{n}\right) u = D y

which leads to

u(x) = \exp\left( \frac{1}{n} \int^x p(t)\,dt\right ) y(x)

generalizing the result above for second order ODEs.
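As a sanity check on the n = 2 case: with u = exp(½∫p) y, the operator identity exp(½∫p)(y″ + p y′ + q y) = u″ + r u, with r = q − p²/4 − p′/2 (the standard depressed form for second order), holds for any smooth y, not just solutions. Here’s a finite-difference sketch; the choices of p, q, and the test function y are arbitrary.

```python
import math

p  = lambda x: 2 * x           # arbitrary example coefficient
dp = lambda x: 2.0             # its derivative
q  = lambda x: math.cos(x)     # arbitrary example coefficient
P  = lambda x: x * x / 2       # (1/2) * integral of p

y = math.sin                   # arbitrary smooth test function
u = lambda x: math.exp(P(x)) * y(x)

def d1(f, x, h=1e-4):
    """First derivative by central differences."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    """Second derivative by central differences."""
    return (f(x - h) - 2 * f(x) + f(x + h)) / (h * h)

x0 = 0.7
left = math.exp(P(x0)) * (d2(y, x0) + p(x0) * d1(y, x0) + q(x0) * y(x0))
r = q(x0) - p(x0)**2 / 4 - dp(x0) / 2
right = d2(u, x0) + r * u(x0)
```

The two sides agree to within finite-difference error, confirming that the change of variables removes the first-derivative term.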