The Pluto-Charon orbit

The Moon doesn’t orbit the center of the Earth; it orbits the center of mass of the Earth-Moon system, which is inside the Earth. The distinction matters for designing satellite orbits, but it cannot be seen on a plot to scale. We’ll quantify this below.

Pluto’s moon Charon, however, is so large relative to Pluto, and so close, that the center of mass of the Pluto-Charon system is outside of Pluto, and you can easily see this in a plot.

Plot of Pluto and Charon orbiting their barycenter

Imagine Pluto and Charon sitting on each end of a balanced seesaw. Pluto is a distance x1 to the left of the fulcrum, and Charon is a distance x2 to the right of the fulcrum. Let m1 be the mass of Pluto and m2 be the mass of Charon. Then

m1 x1 = m2 x2


x1 = m2 (x1 + x2) / (m1 + m2).

Now let’s put in some numbers.

m1 = 1.309 × 10²² kg
m2 = 1.62 × 10²¹ kg
x1 + x2 = 19,640 km

From this we find

x1 = (1.62 × 19640 / 14.71) km = 2163 km

and so the distance from the center of Pluto to the center of mass of the Pluto-Charon system is 2163 km. But the radius of Pluto is only 1190 km. So the center of mass of the Pluto-Charon system is about as far above the surface of Pluto as the center of Pluto is below the surface.

Comparison with the Earth-Moon system

It matters that the moon doesn’t exactly orbit the center of the Earth, but the difference between the center of the Earth and the center of mass of the Earth-Moon system is less dramatic. Let’s put in the numbers for the Earth and Moon.

m1 = 5.97 × 10²⁴ kg
m2 = 7.346 × 10²² kg
x1 + x2 = 392,600 km

From this we find

x1 = (7.346 × 392,600 / 604) km = 4,775 km

The radius of Earth is 6,371 km, and so the center of mass of the Earth-Moon system is inside the Earth.
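
The seesaw formula is easy to check for both systems with a short Python sketch; the masses and separations are the rounded values quoted above.

```python
def barycenter_distance(m1, m2, separation):
    """Distance from the center of the more massive body to the
    center of mass, from m1*x1 = m2*x2 and x1 + x2 = separation."""
    return m2 * separation / (m1 + m2)

# Pluto-Charon: about 2163 km from Pluto's center,
# well outside Pluto's 1190 km radius
pluto_charon = barycenter_distance(1.309e22, 1.62e21, 19640)

# Earth-Moon: about 4770 km from Earth's center,
# inside Earth's 6371 km radius
earth_moon = barycenter_distance(5.97e24, 7.346e22, 392600)

print(pluto_charon, earth_moon)
```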

I made a plot analogous to the one above but for the Earth-Moon system. You can barely see the moon because it is so small relative to the size of its orbit, and you cannot see the difference between the center of the Earth and the barycenter of the Earth-Moon system.

Tidal locking

Not only is Charon tidally locked with Pluto, as our moon is with Earth, but Pluto is tidally locked with Charon as well.

On Earth we only ever see one side of the moon. We never see the “dark side,” which is more accurately the “far side.” But someone standing on the moon would see Earth rotate.

Someone standing on Pluto would only ever see one side of Charon, and someone standing on Charon would only ever see one side of Pluto. Sputnik Planitia, the big heart-shaped feature on Pluto, is on the side of Pluto facing away from Charon, so you could say Pluto is hiding its heart from its companion.

Image of Pluto featuring heart-shaped region

More orbital mechanics posts

Shape of moon orbit around sun

The earth’s orbit around the sun is nearly a circle, and the moon’s orbit around the earth is nearly a circle, but what is the shape of the moon’s orbit around the sun?

You might expect it to be bumpy, bending inward when the moon is between the earth and the sun and bending outward when the moon is on the opposite side of the earth from the sun. But in fact the shape of the moon’s orbit around the sun is convex, as proved in [1] and illustrated below.

If the moon orbited the earth much faster, say 10 times faster, at the same altitude, then we would indeed see a bumpy orbit.

However, nothing could orbit the earth 10 times faster than the moon at the same distance as the moon: orbital period determines altitude and vice versa.

A more realistic example would be a satellite in MEO (Medium Earth Orbit), like a GPS satellite. Such a satellite orbits the earth roughly twice a day. The path of an MEO satellite around the sun is not convex.

The plot above shows about one day of an MEO satellite’s orbit around the sun. Note that the vertical and horizontal scales are not the same; it would be hard to see anything but a flat line if the scales were the same because the satellite is far closer to the earth than the sun.

Here are the equations from [1]. Choose units so that the distance to the moon or satellite is 1 and let d be the distance from the planet to the sun. Let p be the number of times the moon or satellite orbits the planet as the planet orbits the sun (the number of sidereal periods).

x(θ) = d cos(θ) + cos(pθ)
y(θ) = d sin(θ) + sin(pθ)

This assumes both the planet’s orbit around the sun and the satellite’s orbit around the planet are circular, which is a good approximation in our examples.
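
Convexity can be checked numerically from the equations above: a closed plane curve is convex if its signed curvature, proportional to x′y″ − y′x″, never changes sign. The values d ≈ 389 and p ≈ 13.4 for the moon below are approximations I’m assuming, not figures from [1].

```python
import numpy as np

def curvature_sign(d, p, n=100_000):
    # x(θ) = d cos θ + cos(pθ), y(θ) = d sin θ + sin(pθ)
    # Returns samples of x'y'' - y'x''; the curve is convex
    # iff this quantity never changes sign.
    theta = np.linspace(0, 2 * np.pi, n)
    xp  = -d * np.sin(theta) - p * np.sin(p * theta)
    yp  =  d * np.cos(theta) + p * np.cos(p * theta)
    xpp = -d * np.cos(theta) - p**2 * np.cos(p * theta)
    ypp = -d * np.sin(theta) - p**2 * np.sin(p * theta)
    return xp * ypp - yp * xpp

# Moon: d ≈ 389 (Earth-Sun distance over Earth-Moon distance),
# p ≈ 13.4 sidereal months per year: always convex
print(curvature_sign(389, 13.4).min() > 0)   # True

# Hypothetical moon orbiting 10 times faster: bumpy, not convex
print(curvature_sign(389, 134).min() > 0)    # False
```

A little algebra shows the cross product equals d² + p³ + dp(p + 1) cos((p − 1)θ), which stays positive exactly when d > p², consistent with the check above.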

[1] Noah Samuel Brannen. The Sun, the Moon, and Convexity. The College Mathematics Journal, Vol. 32, No. 4 (Sep., 2001), pp. 268-272

Can Brownian motion do work?

According to the latest episode of Eclectic Tech, Richard Feynman argued that Brownian motion cannot do work, but researchers at the University of Arkansas have demonstrated that it can by generating an electric current from Brownian motion in a sheet of graphene. You can read more in the physics journal article by the researchers.

Unfortunately this will be the last episode of Eclectic Tech. In addition to the story above, the episode discussed synchronizing clocks by observing cosmic ray events, a new bioinspired metamaterial, and NASA’s Inspire project.

More on Brownian motion

Infinite periodic table

All the chemical elements discovered or created so far follow a regular pattern in how their electrons are arranged: the nth subshell contains up to 2n – 1 orbitals that each contain up to two electrons. For a given atomic number, you can determine how its electrons are distributed into shells and subshells using the Aufbau principle.

The Aufbau principle is a good approximation, but not exact. For this post we’ll assume it is exact, and that everything in the preceding paragraph generalizes to an arbitrary number of shells and electrons.

Under those assumptions, what would the periodic table look like if more elements are discovered or created?

D. Weiss worked out the recurrence relations that the periods of the table satisfy and found their solutions.

The number of elements in the nth period works out to

   P_n = \frac{(-1)^n(2n+3) + 2n^2 + 6n + 5}{4}

and the atomic numbers of the elements at the end of the nth period (the noble gases) are

Z_n = \frac{(-1)^n(3n+6) + 2n^3 + 12n^2 + 25n - 6}{12}

We can verify that these formulas give the right values for the actual periodic table as follows.

    >>> def p(n): return ((-1)**n*(2*n+3) + 2*n*n + 6*n +5)/4
    >>> def z(n): return ((-1)**n*(3*n+6) + 2*n**3 + 12*n**2 + 25*n - 6)/12
    >>> [p(n) for n in range(1, 8)]
    [2.0, 8.0, 8.0, 18.0, 18.0, 32.0, 32.0]
    >>> [z(n) for n in range(1, 8)]
    [2.0, 10.0, 18.0, 36.0, 54.0, 86.0, 118.0]

So, hypothetically, if there were an 8th row to the periodic table, it would contain 50 elements, and the last element of this row would have atomic number 168.


Chemical element abbreviation patterns

I’ve wondered occasionally about the patterns in how chemical elements are abbreviated. If you don’t know the abbreviation for an element, is there a simple algorithm that would let you narrow the range of possibilities or improve your odds at guessing?

Here’s a survey of how the elements are abbreviated.

Latin and German

The elements that have been known the longest often have abbreviations that are mnemonic in Latin.

  • Iron (Fe)
  • Sodium (Na)
  • Silver (Ag)
  • Tin (Sn)
  • Antimony (Sb)
  • Tungsten (W)
  • Gold (Au)
  • Mercury (Hg)
  • Lead (Pb)
  • Potassium (K)
  • Copper (Cu)

I included Tungsten in this section because it also has an abbreviation that is mnemonic in another language, in this case German.

Initial letter

The easiest abbreviations to remember are simply the first letters of the element names (in English).

  • Boron (B)
  • Carbon (C)
  • Fluorine (F)
  • Hydrogen (H)
  • Iodine (I)
  • Nitrogen (N)
  • Oxygen (O)
  • Phosphorus (P)
  • Sulfur (S)
  • Uranium (U)
  • Vanadium (V)
  • Yttrium (Y)

First two letters

The largest group of elements are those abbreviated by the first two letters of their name. When in doubt, guess the first two letters.

  • Actinium (Ac)
  • Aluminum (Al)
  • Americium (Am)
  • Argon (Ar)
  • Barium (Ba)
  • Beryllium (Be)
  • Bismuth (Bi)
  • Bromine (Br)
  • Calcium (Ca)
  • Cerium (Ce)
  • Chlorine (Cl)
  • Cobalt (Co)
  • Dysprosium (Dy)
  • Erbium (Er)
  • Europium (Eu)
  • Flerovium (Fl)
  • Francium (Fr)
  • Gallium (Ga)
  • Germanium (Ge)
  • Helium (He)
  • Holmium (Ho)
  • Indium (In)
  • Iridium (Ir)
  • Krypton (Kr)
  • Lanthanum (La)
  • Lithium (Li)
  • Lutetium (Lu)
  • Molybdenum (Mo)
  • Neon (Ne)
  • Nickel (Ni)
  • Nobelium (No)
  • Oganesson (Og)
  • Osmium (Os)
  • Polonium (Po)
  • Praseodymium (Pr)
  • Radium (Ra)
  • Rhodium (Rh)
  • Ruthenium (Ru)
  • Scandium (Sc)
  • Selenium (Se)
  • Silicon (Si)
  • Tantalum (Ta)
  • Tellurium (Te)
  • Thorium (Th)
  • Titanium (Ti)
  • Xenon (Xe)

Many of these elements use the first two letters to avoid a conflict with the first letter. For example, helium uses He because hydrogen already took H.

Several elements share a first letter with other elements and none of them uses just that letter. For example, actinium, aluminum, americium, and argon all begin with A, but no element is abbreviated simply A.

Xenon could have been X, or dysprosium could have been just D, but that’s not how it was done.

First letter and next consonant

The next largest group of elements are abbreviated by their first letter and the next consonant, skipping over a vowel.

  • Bohrium (Bh)
  • Cadmium (Cd)
  • Cesium (Cs)
  • Dubnium (Db)
  • Gadolinium (Gd)
  • Hafnium (Hf)
  • Hassium (Hs)
  • Livermorium (Lv)
  • Magnesium (Mg)
  • Manganese (Mn)
  • Meitnerium (Mt)
  • Neodymium (Nd)
  • Neptunium (Np)
  • Nihonium (Nh)
  • Niobium (Nb)
  • Rubidium (Rb)
  • Samarium (Sm)
  • Technetium (Tc)
  • Zinc (Zn)
  • Zirconium (Zr)

Many of these elements would cause a conflict if they had been abbreviated using one of the above rules. For example, cadmium could not be C because that’s carbon, and it could not be Ca because that’s calcium.

Initials of first two syllables

  • Astatine (At)
  • Berkelium (Bk)
  • Darmstadtium (Ds)
  • Einsteinium (Es)
  • Fermium (Fm)
  • Lawrencium (Lr)
  • Mendelevium (Md)
  • Moscovium (Mc)
  • Platinum (Pt)
  • Promethium (Pm)
  • Roentgenium (Rg)
  • Terbium (Tb)
  • Thallium (Tl)

Initials of first and third syllable

  • Californium (Cf)
  • Copernicium (Cn)
  • Palladium (Pd)
  • Rutherfordium (Rf)
  • Seaborgium (Sg)
  • Tennessine (Ts)
  • Ytterbium (Yb)

First and last letter

  • Curium (Cm)
  • Radon (Rn)
  • Thulium (Tm)


First letter and something else

  • Arsenic (As)
  • Chromium (Cr)
  • Plutonium (Pu)
  • Protactinium (Pa)
  • Rhenium (Re)
  • Strontium (Sr)


Update: Here’s a visualization of the categories above.

Periodic table of element abbreviations

Key to the groups above:

  1. First letter
  2. First two letters
  3. First letter and next consonant
  4. Initials of first and second syllables
  5. Initials of first and third syllables
  6. First and last letter
  7. First letter and something else
  8. Historical
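
The patterns above suggest a simple guessing strategy: generate the candidate symbols each rule produces, ordered roughly by how common the rule is. Here’s a hypothetical sketch; the function and its ordering are my own invention, not an official algorithm.

```python
def guess_symbols(name):
    """Candidate symbols for an element name, ordered roughly by
    how common each abbreviation pattern is."""
    name = name.lower()
    vowels = "aeiou"
    guesses = [name[:2].capitalize(), name[0].upper()]
    # first letter plus the next consonant
    for ch in name[1:]:
        if ch not in vowels:
            guesses.append((name[0] + ch).capitalize())
            break
    # first and last letter
    guesses.append((name[0] + name[-1]).capitalize())
    # remove duplicates, preserving order
    return list(dict.fromkeys(guesses))

print(guess_symbols("cadmium"))  # ['Ca', 'C', 'Cd', 'Cm']
print(guess_symbols("helium"))   # ['He', 'H', 'Hl', 'Hm']
```

For cadmium the third guess is the right one, matching the observation above that C and Ca were already taken.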

Related posts

Oscillations in RLC circuits

Electrical and mechanical oscillations satisfy analogous equations. This is the basis of using the word “analog” in electronics. You could study a mechanical system by building an analogous circuit and measuring that circuit in a lab.

Mass, dashpot, spring

Years ago I wrote a series of four posts about mechanical vibrations.

Everything in these posts maps over to electrical vibrations with a change of notation.

That series looked at the differential equation

m u'' + \gamma u' + k u = F \cos\omega t

where m is mass, γ is damping from a dashpot, and k is the stiffness of a spring.

Inductor, resistor, capacitor

Now we replace our mass, dashpot, and spring with an inductor, resistor, and capacitor.

Imagine a circuit with an L henry inductor, an R ohm resistor, and a C farad capacitor in series. Let Q(t) be the charge in coulombs over time and let E(t) be an applied voltage, i.e. an AC power source.

Charge formulation

One can use Kirchhoff’s voltage law to derive

LQ'' + RQ' + \frac{1}{C} Q = E

Here we have the correspondences

\begin{align*} u &\leftrightarrow Q \\ m &\leftrightarrow L \\ \gamma &\leftrightarrow R \\ k &\leftrightarrow 1/C \end{align*}

So charge is analogous to position, inductance is analogous to mass, resistance is analogous to damping, and capacitance is analogous to the reciprocal of stiffness.

The reciprocal of capacitance is called elastance, so we can say elastance is analogous to stiffness.

Current formulation

It’s more common to see the differential equation above written in terms of current I.

I = \frac{dQ}{dt}

If we take the derivative of both sides of

LQ'' + RQ' + \frac{1}{C} Q = E

we get

LI'' + RI' + \frac{1}{C} I = E'

Natural frequency

With mechanical vibrations, as shown here, the natural frequency is

\omega_0 = \sqrt{\frac{k}{m}}

and with electrical oscillations this becomes

\omega_0 = \frac{1}{\sqrt{LC}}

Steady state

When a mechanical or electrical system is driven by a sinusoidal forcing function, the system eventually settles down to a solution proportional to a phase-shifted copy of the driving function.

To be more explicit, the solution to the differential equation

m u'' + \gamma u' + k u = F \cos\omega t

has a transient component that decays exponentially and a steady state component proportional to cos(ωt-φ). The same is true of the equation

LI'' + RI' + \frac{1}{C} I = E'

The proportionality constant is conventionally denoted 1/Δ and so the steady state solution is

\frac{F}{\Delta} \cos(\omega t - \phi)

for the mechanical case and

\frac{F}{\Delta} \cos(\omega t - \phi)

for the electrical case, where F now denotes the amplitude of the forcing term E'.

The constant Δ satisfies

\Delta^2 = m^2(\omega_0^2 -\omega^2)^2 + \gamma^2 \omega^2

for the mechanical system and

\Delta^2 = L^2(\omega_0^2 -\omega^2)^2 + R^2 \omega^2

for the electrical system.

When the damping coefficient γ or the resistance R is small, the maximum amplitude occurs when the driving frequency ω is near the natural frequency ω0.
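
As a quick numerical illustration, the following Python sketch locates the peak of the steady-state amplitude F/Δ for a hypothetical circuit; the component values are arbitrary, chosen so that ω0 = 1.

```python
import numpy as np

def steady_state_amplitude(L, R, C, omega, F=1.0):
    # F/Δ with Δ² = L²(ω0² − ω²)² + R²ω²
    omega0 = 1.0 / np.sqrt(L * C)
    delta = np.sqrt(L**2 * (omega0**2 - omega**2)**2 + R**2 * omega**2)
    return F / delta

L, R, C = 1.0, 0.1, 1.0              # small resistance
omega = np.linspace(0.5, 1.5, 10001)
peak = omega[np.argmax(steady_state_amplitude(L, R, C, omega))]
print(peak)  # close to the natural frequency ω0 = 1
```

Differentiating Δ² shows the exact peak is at ω² = ω0² − R²/(2L²), which approaches ω0 as R shrinks.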

More on damped, driven oscillations here.

How is portable AM radio possible?

The length of antenna you need to receive a radio signal is proportional to the signal’s wavelength, typically 1/2 or 1/4 of the wavelength. Cell phones operate at gigahertz frequencies, and so the antennas are small enough to hide inside the phone.

But AM radio stations operate at much lower frequencies. For example, there’s a local station, KPRC, that broadcasts at 950 kHz, roughly one megahertz. That means the wavelength of their carrier is around 300 meters. An antenna as long as a quarter of a wavelength would be roughly as long as a football field, and yet people listen to AM on portable radios. How is that possible?
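
The wavelength arithmetic above takes only a couple of lines: wavelength is the speed of light divided by frequency.

```python
c = 299_792_458        # speed of light in m/s
f = 950e3              # KPRC's carrier frequency in Hz
wavelength = c / f     # about 315 m
quarter_wave = wavelength / 4
print(wavelength, quarter_wave)   # roughly 315.6 m and 78.9 m
```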

looking inside a portable radio

There are two things going on. First, transmitting is very different from receiving in terms of power, and hence in terms of the need for efficiency. People are not transmitting AM signals from portable radios.

Second, the electrical length of an antenna can be longer than its physical length, i.e. an antenna can function as if it were longer than it actually is. When you tune into a radio station, you’re not physically making your antenna longer or shorter, but you’re adjusting electronic components that make it behave as if you were making it longer or shorter. In the case of an AM radio, the electrical length is orders of magnitude more than the physical length. Electrical length and physical length are closer together for transmitting antennas.

Here’s what a friend of mine, Rick Troth, said when I asked him about AM antennas.

If you pop open the case of a portable AM radio, you’ll see a “loop stick”. That’s the AM antenna. (FM broadcast on most portables uses a telescoping antenna.) The loop is tuned by two things: a ferrite core and the tuning capacitor. The core makes the coiled wiring of the antenna resonate close to AM broadcast frequencies. The “multi gang” variable capacitor coupled with the coil forms an LC circuit, for finer tuning. (Other capacitors in the “gang” tune other parts of the radio.) The loop is small, but is tuned for frequencies from 530 kHz to 1.7 MHz.

Loops are not new. When I was a kid, I took apart so many radios. Most of the older (tube type, and AM only) radios had a loop inside the back panel. Quite different from the loop stick, but similar electrical properties.

Car antennas don’t match the wavelengths for AM broadcast. Never have. That’s a case where matching matters less for receivers. (Probably matters more for satellite frequencies because they’re so weak.) Car antennas, whether whip from decades ago or embedded in the glass, probably match FM broadcast. (About 28 inches per side of a dipole, or a 28 inch quarter wave vertical.) But again, it does matter a little less for receive than for transmit.

In the photo above, courtesy Rick, the AM antenna is the copper coil on the far right. The telescoping antenna outside the case extends to be much longer physically than the AM antenna, even though AM radio waves are two orders of magnitude longer than FM radio waves.

Solar declination

This post expands on a small part of the post Demystifying the Analemma by M. Tirado.

The apparent solar declination δ is given by

δ = sin⁻¹( sin(ε) sin(θ) )

where ε is the axial tilt and θ is the angular position of a planet. See Tirado’s post for details. Here I want to unpack a couple of things from the post. One is that the declination is approximately

δ = ε sin(θ),

the approximation being particularly good for small ε. The other is that the more precise equation approaches a triangular wave as ε approaches a right angle.

Let’s start out with ε = 23.4° because that is the axial tilt of the Earth. The approximation above is a variation on the approximation

sin φ ≈ φ

for small φ when φ is measured in radians. More on that here.

An angle of 23.4° is 0.4084 radians. This is not particularly small, and yet the approximation above works well. The approximation amounts to approximating sin⁻¹(x) with x, and Taylor’s theorem tells us the error is about x³/6, which for x = sin(ε) is about 0.01. You can’t see the difference between the exact and approximate equations from looking at their graphs; the plot lines lie on top of each other.

Even for a much larger tilt of 60° = 1.047 radians, the two curves are fairly close together. The approximation, in blue, slightly overestimates the exact value, in gold.

This plot was produced in Mathematica with

    ε = 60 Degree
    Plot[{ε Sin[θ], ArcSin[Sin[ε] Sin[θ]]}, {θ, 0, 2 π}]

As ε gets larger, the curves start to separate. When ε = 90° the gold curve becomes exactly a triangular wave.

Update: Here’s a plot of the maximum approximation error as a function of ε.
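
That maximum error is easy to compute numerically. The following Python sketch evaluates it over a full orbit; for Earth’s tilt the approximation turns out to be good to about a quarter of a degree.

```python
import numpy as np

def max_declination_error(tilt_degrees):
    """Maximum of |sin⁻¹(sin ε sin θ) − ε sin θ| over θ, in degrees."""
    eps = np.radians(tilt_degrees)
    theta = np.linspace(0, 2 * np.pi, 100_000)
    exact = np.arcsin(np.sin(eps) * np.sin(theta))
    approx = eps * np.sin(theta)
    return np.degrees(np.max(np.abs(exact - approx)))

# Earth's tilt: the approximation is good to about a quarter degree
print(max_declination_error(23.4))
```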

Related posts

When do two-body systems have stable Lagrange points?

The previous post looked at two of the five Lagrange points of the Sun-Earth system. These points, L1 and L2, are located on either side of Earth along a line between the Earth and the Sun. The third Lagrange point, L3, is located along that same line, but on the opposite side of the Sun.

L1, L2, and L3 are unstable, but stable enough on a short time scale to be useful places to position probes. Lagrange points are in the news this week because the James Webb Space Telescope (JWST), launched on Christmas Day, is headed toward L2 at the moment.

The remaining Lagrange points, L4 and L5, are stable. These points are essentially in Earth’s orbit around the Sun, 60° ahead and 60° behind Earth. To put it another way, they’re located where Earth will be in two months and where Earth was two months ago. The points L3, L4, and L5 form an equilateral triangle centered at the Sun.

Lagrange points more generally

Lagrange points are not unique to the Sun and Earth; other two-body systems have them as well. Start with two bodies of masses m1 and m2, such as a star and a planet or a planet and a moon, and add a third body, such as the JWST, whose mass is negligible compared to the other two.

The L1, L2, and L3 points are always unstable, meaning that an object placed there will eventually leave, but the L4 and L5 points are stable, provided one of the bodies is sufficiently less massive than the other. This post will explore just how much less massive.

Mass ratio requirement

Michael Spivak [1] devotes a section of his physics book to the Trojan asteroids, asteroids that orbit the Sun at the L4 and L5 Lagrange points of a Sun-planet system. Most Trojan asteroids are part of the Sun-Jupiter system, but other planets have Trojan asteroids as well. The Earth has a couple Trojan asteroids of its own.

Spivak shows that in order for L4 and L5 to be stable, the masses of the two objects must satisfy

(m1 − m2) / (m1 + m2) > k

where m1 is the mass of the more massive body, m2 is the mass of the less massive body, and

k = √(23/27).

If we define r to be the ratio of the smaller mass to the larger mass,

r = m2 / m1,

then by dividing the numerator and denominator by m1 we see that equivalently we must have

(1 − r) / (1 + r) > k.

We run into the function (1 − z)/(1 + z) yet again. As we’ve pointed out before, this function is its own inverse, and so the solution for r is that

r < (1 − k) / (1 + k) = 0.04006…

In other words, the more massive body must be about 25 times as massive as the smaller body.

The Sun is over 1000 times more massive than Jupiter, so Jupiter’s L4 and L5 Lagrange points with respect to the Sun are stable. The Earth is over 80 times more massive than the Moon, so the L4 and L5 points of the Earth-Moon system are stable as well.

Pluto has only 8 times the mass of its moon Charon, so the L4 and L5 points of the Pluto-Charon system would not be stable.
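
The stability check reduces to comparing a mass ratio against (1 − k)/(1 + k). Here’s a Python sketch using the approximate mass ratios quoted above.

```python
from math import sqrt

k = sqrt(23 / 27)
r_max = (1 - k) / (1 + k)   # about 0.04006: largest stable ratio m2/m1

def l4_l5_stable(mass_ratio):
    """mass_ratio is m2/m1, smaller mass over larger."""
    return mass_ratio < r_max

print(l4_l5_stable(1 / 1047))   # Sun-Jupiter: True
print(l4_l5_stable(1 / 81.3))   # Earth-Moon: True
print(l4_l5_stable(1 / 8))      # Pluto-Charon: False
```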

Related posts

[1] Michael Spivak: Physics for Mathematicians: Mechanics I. Addendum 10A.

Fraud, Sloppiness, and Statistics

A few years ago the scientific community suddenly realized that a lot of scientific papers were wrong. I imagine a lot of people knew this all along, but suddenly it became a topic of discussion and people realized the problem was bigger than imagined.

The layman’s first response was “Are you saying scientists are making stuff up?” and the response from the scientific community was “No, that’s not what we’re saying. There are subtle reasons why an honest scientist can come to the wrong conclusion.” In other words, don’t worry about fraud. It’s something else.

Well, if it’s not fraud, what is it? The most common explanations are sloppiness and poor statistical practice.

Sloppiness

The sloppiness hypothesis says that irreproducible results may be the result of errors. Or maybe the results are essentially correct, but the analysis is not reported in sufficient detail for someone to verify it. I first wrote about this in 2008.

While I was working for MD Anderson Cancer Center, a couple of my colleagues dug into irreproducible papers and tried to reverse engineer the mistakes and omissions. For example, this post mentioned some of the erroneous probability formulas that were implicitly used in journal articles.

Bad statistics

The bad statistics hypothesis was championed by John Ioannidis in his now-famous paper Why Most Published Research Findings Are False. The title could have been expanded to “Why most research findings will be false, even if everyone is honest and careful.” For a cartoon version of Ioannidis’s argument, see xkcd’s explanation of why jelly beans cause acne. In a nutshell, the widespread use of p-values makes it too easy to find spurious but publishable results.

Ioannidis explained that in theory most results could be false, based on statistical theory, but potentially things could be better in practice than in theory. Unfortunately they are not. Numerous studies have tried to empirically estimate [1] what proportion of papers cannot be reproduced. The estimate depends on context, but it’s high.

For example, ScienceNews reported this week on an attempt to reproduce 193 experiments in cancer biology. Only 50 of the experiments could be reproduced, and among those, the effects were on average 85% smaller than initially reported. Here’s the full report.

Fraud

This post started out by putting fraud aside. In a sort of scientific version of Hanlon’s razor, we agreed not to attribute to fraud what could be adequately explained by sloppiness and bad statistics. But what about fraud?

There was a spectacular case of fraud in The Lancet last year.

article summary with RETRACTED stamped on top in red

The article was published May 22, 2020 and retracted on June 4, 2020. I forget the details, but the fraud was egregious. For example, if I remember correctly, the study claimed to have data on more than 100% of the population in some regions. Peer review didn’t catch the fraud but journalists did.

Who knows how common fraud is? I see articles occasionally that try to estimate it. But exposing fraud takes a lot of work, and it does not advance your career.

I said above that my former colleagues were good at reverse engineering errors. They also ended up exposing fraud. They started out trying to figure out how Anil Potti could have come to the results he did, and finally determined that he could not have. This ended up being reported in The Economist and on 60 Minutes.

As Nick Brown recently said on Twitter,

At some point I think we’re going to have to accept that “widespread fraud” is both a plausible and parsimonious explanation for the huge number of failed replications we see across multiple scientific disciplines.

That’s a hard statement to accept, but that doesn’t mean it’s wrong.

[1] If an attempt to reproduce a study fails, how do we know which one was right? The second study could be wrong, but it’s probably not. Verification is generally easier than discovery. The original authors probably explored multiple hypotheses looking for a publishable result, while the replicators tested precisely the published hypothesis.

Andrew Gelman suggested a thought experiment. When a large follow-up study fails to replicate a smaller initial study, imagine if the timeline were reversed: if someone ran a small study and came up with a different result than a previous large study, which study would have more credibility?