When do two-body systems have stable Lagrange points?

The previous post looked at two of the five Lagrange points of the Sun-Earth system. These points, L1 and L2, are located on either side of Earth along a line between the Earth and the Sun. The third Lagrange point, L3, is located along that same line, but on the opposite side of the Sun.

L1, L2, and L3 are unstable, but objects placed there drift away slowly enough that they are useful places to position probes. Lagrange points are in the news this week because the James Webb Space Telescope (JWST), launched on Christmas Day, is headed toward L2 at the moment.

The remaining Lagrange points, L4 and L5, are stable. These points are essentially in Earth’s orbit around the Sun, 60° ahead and 60° behind Earth. To put it another way, they’re located where Earth will be in two months and where Earth was two months ago. The points L3, L4, and L5 form an equilateral triangle centered at the Sun.

Lagrange points more generally

Lagrange points are not unique to the Sun and Earth; they exist for other two-body systems as well. Take two bodies m1 and m2, such as a star and a planet or a planet and a moon, and a third body, such as the JWST, whose mass is negligible compared to the other two.

The L1, L2, and L3 points are always unstable, meaning that an object placed there will eventually leave, but the L4 and L5 points are stable, provided one of the bodies is sufficiently less massive than the other. This post will explore just how much less massive.

Mass ratio requirement

Michael Spivak [1] devotes a section of his physics book to the Trojan asteroids, asteroids that orbit the Sun at the L4 and L5 Lagrange points of a Sun-planet system. Most Trojan asteroids are part of the Sun-Jupiter system, but other planets have Trojan asteroids as well. The Earth has a couple Trojan asteroids of its own.

Spivak shows that in order for L4 and L5 to be stable, the masses of the two objects must satisfy

(m1 − m2) / (m1 + m2) > k

where m1 is the mass of the more massive body, m2 is the mass of the less massive body, and

k = √(23/27).

If we define r to be the ratio of the smaller mass to the larger mass,

r = m2 / m1,

then dividing the numerator and denominator by m1 shows that equivalently we must have

(1 – r) / (1 + r) > k.

We run into the function (1 – z)/(1 + z) yet again. As we’ve pointed out before, this function is its own inverse, and so the solution for r is that

r < (1 – k) / (1 + k) = 0.04006…

In other words, the more massive body must be about 25 times as massive as the smaller body, since 1/0.04006 ≈ 25.

The Sun is over 1000 times more massive than Jupiter, so Jupiter’s L4 and L5 Lagrange points with respect to the Sun are stable. The Earth is over 80 times more massive than the Moon, so the L4 and L5 points of the Earth-Moon system are stable as well.

Pluto has only 8 times the mass of its moon Charon, so the L4 and L5 points of the Pluto-Charon system would not be stable.
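Spivak's criterion is easy to check numerically. Here's a short Python sketch; the mass ratios below are rough round figures for each system, not precise values.

```python
from math import sqrt

# Spivak's stability constant and the resulting bound on r = m2/m1
k = sqrt(23 / 27)
r_max = (1 - k) / (1 + k)  # about 0.04006

# Rough mass ratios (smaller body / larger body)
systems = {
    "Sun-Jupiter":  1 / 1047,
    "Earth-Moon":   1 / 81.3,
    "Pluto-Charon": 1 / 8.1,
}

for name, r in systems.items():
    status = "stable" if r < r_max else "unstable"
    print(f"{name}: L4/L5 {status}")
```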


[1] Michael Spivak: Physics for Mathematicians: Mechanics I. Addendum 10A.

Fraud, Sloppiness, and Statistics

A few years ago the scientific community suddenly realized that a lot of scientific papers were wrong. I imagine a lot of people knew this all along, but suddenly it became a topic of discussion and people realized the problem was bigger than imagined.

The layman’s first response was “Are you saying scientists are making stuff up?” and the response from the scientific community was “No, that’s not what we’re saying. There are subtle reasons why an honest scientist can come to the wrong conclusion.” In other words, don’t worry about fraud. It’s something else.

Well, if it’s not fraud, what is it? The most common explanations are sloppiness and poor statistical practice.

Sloppiness

The sloppiness hypothesis says that irreproducible results may be the result of errors. Or maybe the results are essentially correct, but the analysis is not reported in sufficient detail for someone to verify it. I first wrote about this in 2008.

While I was working for MD Anderson Cancer Center, a couple of my colleagues dug into irreproducible papers and tried to reverse engineer the mistakes and omissions. For example, this post mentioned some of the erroneous probability formulas that were implicitly used in journal articles.

Bad statistics

The bad statistics hypothesis was championed by John Ioannidis in his now-famous paper Most published research findings are false. The article could have been titled “Why most research findings will be false, even if everyone is honest and careful.” For a cartoon version of Ioannidis’s argument, see xkcd’s explanation of why jelly beans cause acne. In a nutshell, the widespread use of p-values makes it too easy to find spurious but publishable results.

Ioannidis argued from statistical theory that most results could be false, though things could potentially be better in practice than in theory. Unfortunately they are not. Numerous studies have tried to empirically estimate [1] what proportion of papers cannot be reproduced. The estimate depends on context, but it’s high.

For example, ScienceNews reported this week on an attempt to reproduce 193 experiments in cancer biology. Only 50 of the experiments could be reproduced, and among those, the measured effects were on average 85% smaller than initially reported. Here’s the full report.

Fraud

This post started out by putting fraud aside. In a sort of scientific version of Hanlon’s razor, we agreed not to attribute to fraud what could be adequately explained by sloppiness and bad statistics. But what about fraud?

There was a spectacular case of fraud in The Lancet last year.

[Image: article summary with RETRACTED stamped on top in red]

The article was published May 22, 2020 and retracted on June 4, 2020. I forget the details, but the fraud was egregious. For example, if I remember correctly, the study claimed to have data on more than 100% of the population in some regions. Peer review didn’t catch the fraud but journalists did.

Who knows how common fraud is? I see articles occasionally that try to estimate it. But exposing fraud takes a lot of work, and it does not advance your career.

I said above that my former colleagues were good at reverse engineering errors. They also ended up exposing fraud. They started out trying to figure out how Anil Potti could have come to the results he did, and finally determined that he could not have. This ended up being reported in The Economist and on 60 Minutes.

As Nick Brown recently said on Twitter,

At some point I think we’re going to have to accept that “widespread fraud” is both a plausible and parsimonious explanation for the huge number of failed replications we see across multiple scientific disciplines.

That’s a hard statement to accept, but that doesn’t mean it’s wrong.

[1] If an attempt to reproduce a study fails, how do we know which one was right? The second study could be wrong, but it’s probably not. Verification is generally easier than discovery. The original authors probably explored multiple hypotheses looking for a publishable result, while the replicators tested precisely the published hypothesis.

Andrew Gelman suggested a thought experiment: when a large follow-up study fails to replicate a smaller initial study, imagine the timeline reversed. If someone ran a small study and came up with a different result than a previous large study, which study would have more credibility?

Aquinas on epicycles

C. S. Lewis quotes Thomas Aquinas in The Discarded Image:

In astronomy an account is given of eccentricities and epicycles on the ground that if their assumption is made the sensible appearances as regards celestial motion can be saved. But this is not a strict proof since for all we know they could also be saved by some different assumption.

Time dilation in SF and GPS

I’m reading Voyage to Alpha Centauri and ran into a question about relativity. The book says in one place that their ship is moving at 56.7% of the speed of light, and in another place it says that time moves about 20% slower for them relative to folks on Earth. Are those two statements consistent?

It wouldn’t bother me if they weren’t consistent. I ordinarily wouldn’t bother to check such things. But I remember looking into time dilation before and being surprised how little effect velocity has until you get very close to the speed of light. I couldn’t decide whether the relativistic effect in the novel sounded too large or too small.

If a stationary observer is watching a clock moving at velocity v, during one second of the observer’s time,

\sqrt{1 - \frac{v^2}{c^2}}

seconds will have elapsed on the moving clock.

Even at 20% of the speed of light, the moving clock only appears to slow down by about 2%.

If, as in the novel, a spaceship is moving at 56.7% of the speed of light, then for every second an Earth-bound observer experiences, someone on the ship will experience √(1 – 0.567²) = 0.82 seconds. So time would run about 20% slower on the ship, as the novel says.
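The dilation factor is a one-line function; here's a quick Python check of both numbers above.

```python
from math import sqrt

def dilation_factor(beta):
    """Seconds elapsing on a clock moving at v = beta * c
    for each second of a stationary observer's time."""
    return sqrt(1 - beta**2)

print(dilation_factor(0.2))    # about 0.98: roughly a 2% slowdown
print(dilation_factor(0.567))  # about 0.82, as in the novel
```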

The author must have either done this calculation or asked someone to do it for him. I had a science fiction author ask me for something a while back, though I can’t remember right now what it was.

Small velocities

You can expand the expression above in a Taylor series to get

\sqrt{1 - \frac{v^2}{c^2}} = 1 -\frac{v^2}{2c^2} -\frac{v^4}{8c^4} + \cdots

and so for velocities much smaller than the speed of light, the effect of time dilation is about v²/2c², a quadratic function of velocity. You can use this to confirm the comment above that when v/c = 0.2, the effect of time dilation is about 2%.

GPS satellites travel at about 14,000 km/hour, and so the effect of time dilation is on the order of 1 part in 10¹⁰. This would seem insignificant, except it amounts to milliseconds per year, and so it does make a practical difference.

For something moving 100 times slower, like a car, time dilation would be 10,000 times smaller. So time in a car driving at 90 miles per hour slows down by one part in 10¹⁴ relative to a stationary observer.
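The quadratic approximation is easy to check in a few lines of Python; the GPS speed is the rough figure quoted above.

```python
c = 299_792_458  # speed of light in m/s

def dilation_small_v(v):
    """Leading-order fractional slowdown for v << c: v^2 / (2 c^2)."""
    return v**2 / (2 * c**2)

v_gps = 14_000 * 1000 / 3600  # 14,000 km/h in m/s
v_car = v_gps / 100           # roughly 90 mph

print(dilation_small_v(v_gps))  # on the order of 1e-10
print(dilation_small_v(v_car))  # 10,000 times smaller, about 1e-14
```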

Tape measures

The math in the section above is essentially the same as the math in the post explaining why it doesn’t matter much if a tape measure doesn’t run exactly straight when measuring a large distance. They both expand an expression derived from the Pythagorean theorem in a Taylor series.

Martian gravity

There is a lot of talk about Mars right now, and understandably so. The flight of Ingenuity today was awesome. As Daniel Oberhaus pointed out on Twitter,

… the atmosphere on the surface of Mars is so thin that it’s the equivalent of flying at ~100k feet on Earth.

No rotorcraft, piloted or uncrewed, has ever broken 50k on Earth.

When I heard that gravity on Mars is about 1/3 of that of Earth, that sounded too small to me. My thinking was that gravity on the moon is about 1/6 of Earth, and Mars is much bigger than the moon, so gravity on Mars ought to be closer to gravity on Earth.

Where I went wrong was my assessment that Mars is “much” bigger than the moon. The radius of Mars is only about twice that of our moon; I would have guessed higher.

Surface gravity is proportional to mass over radius squared. If the density of two balls is the same, then mass goes up like radius cubed, and so gravity increases in proportion to radius. The densities of Mars and the moon are about the same, and so the body with twice the radius has about twice the surface gravity.

Let’s put some numbers to things. We’ll let m and r stand for mass and mean radius. And we’ll let subscripts E, M, and L stand for Earth, Mars, and Luna (our moon).

rE = 6371 km
rM = 3390 km
rL = 1738 km

The radius of Mars is approximately the geometric mean of the radii of the Earth and the moon.

(rE rL)½ = 3327 ≈ 3390 = rM

To calculate surface gravity we’ll need masses [1].

mE = 5.972 × 10²⁴ kg
mM = 6.417 × 10²³ kg
mL = 7.342 × 10²² kg

The mass of Mars is also approximately the geometric mean of the masses of the Earth and the moon [2].

(mE mL)½ = 6.6 × 10²³ ≈ 6.4 × 10²³ = mM

The ratio of Martian gravity to lunar gravity is

(mM / rM²) / (mL / rL²) = 2.2968

The ratio of Earth gravity to Martian gravity is

(mE / rE²) / (mM / rM²) = 2.635

so saying surface gravity on Mars is a third of that on Earth underestimates gravity on Mars a little but not too much.
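The calculations above are easy to reproduce in Python, using the radii and masses listed earlier.

```python
# Mean radii in km and masses in kg, from the tables above
r = {"Earth": 6371, "Mars": 3390, "Moon": 1738}
m = {"Earth": 5.972e24, "Mars": 6.417e23, "Moon": 7.342e22}

# Surface gravity is proportional to m / r^2; units cancel in the ratios
g = {body: m[body] / r[body]**2 for body in r}

print((r["Earth"] * r["Moon"]) ** 0.5)  # about 3328, close to r_Mars = 3390
print(g["Mars"] / g["Moon"])            # about 2.30
print(g["Earth"] / g["Mars"])           # about 2.63
```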


[1] I’m assuming mass is uniformly distributed for each body. It’s not exactly, and this makes a difference if you’re planning satellite trajectories, but it doesn’t make much of a difference here.

[2] This is not a consequence of the relationship between the radii because the bodies have different densities. The moon and Mars have similar densities, but both are significantly less dense than Earth.

Coulomb’s constant

Richard Feynman said nearly everything is really interesting if you go into it deeply enough. In that spirit I’m going to dig into the units on Coulomb’s constant. This turns out to be an interesting rabbit trail.

Coulomb’s law says that the force between two charged particles is proportional to the product of their charges and inversely proportional to the square of the distance between them. In symbols,

F = k_e \frac{q_1\, q_2}{r^2}

The proportionality constant, the ke term, is known as Coulomb’s constant.

Herd immunity countdown

A few weeks ago I wrote a post giving a back-of-the-envelope calculation regarding when the US would reach herd immunity to SARS-COV-2. As I pointed out repeatedly, this is only a rough estimate because it makes numerous simplifying assumptions and is based on numbers that have a lot of uncertainty around them. See that post for details.

That post was based on the assumption that 26 million Americans had been infected with the virus. I’ve heard other estimates of 50 million or 100 million.

Update: The CDC estimates that 83 million Americans were infected in 2020 alone. I don’t see that they’ve issued any updates to this figure, but everyone who has been infected in 2021 brings us closer to herd immunity.

The post was also based on the assumption that we’re vaccinating 1.3 million per day. A more recent estimate is 1.8 million per day. (Update: We’re at 2.7 million per day as of March 30, 2021.) So maybe my estimate was pessimistic. On the other hand, the estimate for the number of people with pre-existing immunity that I used may have been optimistic.

Because there is so much we don’t know, and because numbers are frequently being updated, I’ve written a little Python code to make all the assumptions explicit and easy to update. According to this calculation, we’re 45 days from herd immunity. (Update: We could be at herd immunity any time now, depending on how many people had pre-existing immunity.)

As I pointed out before, herd immunity is not a magical cutoff with an agreed-upon definition. I’m using a definition that was suggested a year ago. Viruses never [1] completely go away, so any cutoff is arbitrary.

Here’s the code. It’s Python, but it would be trivial to port to any programming language. Just remove the underscores as thousands separators if your language doesn’t support them and change the comment marker if necessary.

US_population         = 330_000_000
num_vaccinated        =  50_500_000 # As of March 30, 2021
num_infected          =  83_100_000 # As of January 1, 2021
vaccine_efficacy      = 0.9
herd_immunity_portion = 0.70

# Some portion of the population had immunity to SARS-COV-2
# before the pandemic. I've seen estimates from 10% up to 60%.
portion_pre_immune = 0.30
num_pre_immune = portion_pre_immune*US_population

# Adjust for vaccines given to people who are already immune.
portion_at_risk = 1.0 - (num_pre_immune + num_infected)/US_population

num_new_vaccine_immune = num_vaccinated*vaccine_efficacy*portion_at_risk

# Number immune at present
num_immune = num_pre_immune + num_infected + num_new_vaccine_immune
herd_immunity_target = herd_immunity_portion*US_population

num_needed = herd_immunity_target - num_immune

num_vaccines_per_day = 2_700_000 # As of March 30, 2021
num_new_immune_per_day = num_vaccines_per_day*portion_at_risk*vaccine_efficacy

days_to_herd_immunity = num_needed / num_new_immune_per_day

print(days_to_herd_immunity)

[1] One human virus has been eliminated. Smallpox was eradicated two centuries after the first modern vaccine.

Solving for neck length

A few days ago I wrote about my experiment with a wine bottle and a beer bottle. I blew across the empty bottles and measured the resulting pitch, then compared the result to the pitch you would get in theory if the bottle were a Helmholtz resonator. See the previous post for details.

Tonight I repeated my experiment with an empty water bottle. But I ran into a difficulty immediately: where would you say the neck ends?

water bottle

An ideal Helmholtz resonator is a cylinder on top of a larger sphere. My water bottle is basically a cone on top of a cylinder.

So instead of measuring the neck length L and seeing what pitch was predicted with the formula from the earlier post

f = \frac{v}{2\pi} \sqrt{\frac{A}{LV}}

I decided to solve for L and see what neck measurement would be consistent with the Helmholtz resonator approximation. The pitch f was 172 Hz, the neck of the bottle is one inch wide, and the volume is half a liter. This implies L is 10 cm, which is a little less than the height of the conical part of the bottle.

Herd immunity on the back of an envelope

This post presents a back-of-the-envelope calculation regarding COVID herd immunity in the US. Every input to the calculation is only roughly known, and I’m going to make simplifying assumptions left and right. So take this all with a grain of salt.

According to a recent article, about 26 million Americans have been vaccinated against COVID, about 26 million Americans have been infected, and 1.34 million a day are being vaccinated, all as of February 1, 2021.

Somewhere around half the US population was immune to SARS-COV-2 before the pandemic began, due to immunity acquired from previous coronavirus exposure. The proportion isn’t known accurately, but has been estimated as somewhere between 40 and 60 percent.

Let’s say that as of February 1, that 184 million Americans had immunity, either through pre-existing immunity, infection, or vaccination. There is some overlap between the three categories, but we’re taking the lowest estimate of pre-existing immunity, so maybe it sorta balances out.

The vaccines are said to be 90% effective. That’s probably optimistic—treatments often don’t perform as well in the wild as they do in clinical trials—but let’s assume 90% anyway. Furthermore, let’s assume that half the people being vaccinated already have immunity, due to pre-existing immunity or infection.

Then the number of people gaining immunity each day is 0.5*0.9*1,340,000, which is about 600,000 per day. This assumes nobody develops immunity through infection from here on out, though of course some will.

There’s no consensus on how much of the population needs to have immunity before you have herd immunity, but I’ve seen numbers like 70% tossed around, so let’s say 70%.

We assumed we had 184 M with immunity on February 1, and we need 231 M (70% of a US population of 330M) to have herd immunity, so we need 47 M more people. If we’re gaining 600,000 per day through vaccination, this would take 78 days from February 1, which would be April 20.
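The arithmetic above fits in a few lines of Python, with each assumption spelled out as a variable:

```python
US_population = 330e6
num_immune    = 184e6                 # estimated immune as of February 1
herd_target   = 0.70 * US_population  # assumed herd immunity threshold

# 1.34M vaccinations/day, 90% efficacy, half going to already-immune people
new_immune_per_day = 0.5 * 0.9 * 1_340_000  # about 600,000 per day

days = (herd_target - num_immune) / new_immune_per_day
print(days)  # about 78 days from February 1, i.e. around April 20
```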

So, the bottom line of this very crude calculation is that we should have herd immunity by the end of April.

I’ve pointed out several caveats. There are more, but I’ll only mention one, and that is that herd immunity is not an objective state. Viruses never completely go away; only one human virus—smallpox—has ever been eradicated, and that took two centuries after the development of a vaccine.

Every number in this post is arguable, and so the result should be taken with a grain of salt, as I said from the beginning. Certainly you shouldn’t put April 20 on your calendar as the day the pandemic is over. But this calculation does suggest that we should see a substantial drop in infections long before most of the population has been vaccinated.

Update: A few things have changed since this was written. For one thing, we’re vaccinating more people per day. See an update post with code you can update (or just carry out by hand) as numbers change.


Good news from Pfizer and Moderna

Both Pfizer and Moderna have announced recently that their SARS-COV-2 vaccine candidates reduce the rate of infection by over 90% in the active group compared to the control (placebo) group.

That’s great news. The vaccines may turn out to be less than 90% effective when all is said and done, but even so they’re likely to be far more effective than expected.

But there’s other good news that might be overlooked: the subjects in the control groups did well too, though not as well as in the active groups.

The infection rate was around 0.4% in the Pfizer control group and around 0.6% in the Moderna control group.

There were 11 severe cases of COVID in the Moderna trial, out of 30,000 subjects, all in the control group.

There were 0 severe cases of COVID in the Pfizer trial in either group, out of 43,000 subjects.