Hohmann transfer orbit

How does a spacecraft orbiting a planet move from one circular orbit to another? It can’t just change lanes like a car going around a racetrack because speed and altitude cannot be changed independently.

The most energy-efficient way to move between circular orbits is the Hohmann transfer orbit [1]. The Hohmann orbit is an idealization, but it approximates maneuvers actually done in practice.

The Hohmann transfer requires applying thrust twice: once to leave the first circular orbit into the elliptical orbit, and once again to leave the elliptical orbit for the new circular orbit.

Hohmann transfer orbit

Suppose we’re in the orbit represented by the inner blue circle above and we want to move to the outer green circle. We apply our first instantaneous burst of thrust, indicated by the inner ×, and that puts us into the orange elliptical orbit.

(We can’t move faster in our current orbit without continually applying thrust because velocity determines altitude. The new orbit will pass through the point at which we applied the thrust, and so our new orbit cannot be a circle because distinct concentric circles don’t intersect.)

The point at which we first apply thrust will be the point of the new orbit closest to the planet, the point with maximum kinetic energy. The point furthest from the planet, the point with maximum potential energy, will occur 180° later on the opposite side. The first burst of thrust is calculated so that the maximum altitude of the resulting elliptical orbit is the desired altitude of the new circular orbit.

Once the spacecraft reaches the elliptical orbit’s maximum distance from the planet, marked by the outer ×, we apply the second thrust. The amount of thrust is whatever it needs to be in order to maintain a circular orbit at the new altitude. The second half of the elliptical orbit, indicated by the dashed orange curve, is not taken; it’s only drawn to show the orbit we would stay on if we didn’t apply the second thrust.

So in summary, we use one burst of thrust to enter an elliptic orbit, and one more burst of thrust to leave that elliptical orbit for the new circular orbit. There are ways to move between circular orbits more quickly, but they require more fuel.
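
To make this concrete, here is a minimal sketch of the two burns computed from the vis-viva equation. The numbers are illustrative assumptions, not from the post: a transfer from a low Earth orbit of radius 6,678 km to geostationary orbit at 42,164 km.

    from math import sqrt

    # Assumed illustrative values: Earth's gravitational parameter (km^3/s^2)
    # and the radii of the initial and final circular orbits (km).
    mu = 398600.4418
    r1 = 6678.0     # initial circular orbit
    r2 = 42164.0    # final circular orbit

    a = (r1 + r2) / 2   # semi-major axis of the transfer ellipse

    # First burn: from circular speed at r1 up to the ellipse's perigee speed
    dv1 = sqrt(mu * (2/r1 - 1/a)) - sqrt(mu/r1)

    # Second burn: from the ellipse's apogee speed up to circular speed at r2
    dv2 = sqrt(mu/r2) - sqrt(mu * (2/r2 - 1/a))

    print(dv1, dv2)   # roughly 2.4 and 1.5 km/s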

The same principles work in reverse, and so you could also use a Hohmann transfer to descend from a higher orbit to a lower one. In that case you would apply thrust opposite the direction of motion.

There are several idealizations in the Hohmann transfer orbit. The model assumes the orbits are planar, that the initial and final orbits are circular, and that the two burns are each instantaneous.

The Hohmann transfer also assumes that the mass of the spacecraft is negligible compared to the planet. This would apply, for example, to a typical communication satellite, but perhaps not to a Death Star.

More orbital mechanics posts

[1] If you’re moving from one circular orbit to another with more than about 12 times the radius, then a bi-elliptic transfer maneuver would use less fuel. Instead of taking half of one elliptical orbit to make the transfer, it fires thrusters three times, using half of each of two different elliptical orbits to reach the desired circular orbit.

Nuclear Tornadoes

Tornado with lightning

I’ve recently started reading One Giant Leap, a book about the Apollo missions. In a section on nuclear testing, the book describes what was believed to be a connection between weapon tests and tornadoes.

In 1953 and 1954 there were all-time record outbreaks of tornadoes across the US, accompanied by fierce thunderstorms. The belief that the tornadoes were caused by the atomic testing in Nevada was so common among the public that Gallup conducted a poll on the topic, which showed that 29% of Americans thought that A-bombs were causing tornadoes many states away; 20% couldn’t say whether they were or not. Members of Congress required the military to provide data on any connection.

It wasn’t unreasonable at the time to speculate that there could be a connection between nuclear testing and tornadoes. No one knew much about the effects of nuclear explosions or about the causes of tornadoes in 1954. But here’s the part that was preventable:

In fact it turned out there weren’t really more tornadoes in those years; it was just that tornado reporting and tracking had gotten much more effective and thorough.

Once the public got the impression that tornadoes were becoming more frequent, confirmation bias would firm up that impression with each tornado sighting.

I imagine someone knew this in 1954. Someone must have looked at the data and said “Hey, wait a minute.” This person was probably ridiculed by the few who even listened.

Update: I found a newspaper article cited in One Giant Leap: “A-Bomb Link to Tornadoes is Discounted,” Washington Post. November 18, 1954. The article quoted D. Lee Harris of the United States Weather Bureau saying “Improvement in tornado reporting is largely responsible for the great increase in total number of the storms reported in recent years.”

The Gallup poll was prior to the article cited. It would be interesting if there had been a follow-up poll to see how much impact Harris’ statement had on public opinion.

Expected length of longest common DNA substrings

If we have two unrelated sequences of DNA, how long would we expect the longest common substring of the two sequences to be? Since DNA sequences come from a very small alphabet, just four letters, any two sequences of moderate length will have some common substring, but how long should we expect it to be?

Motivating problem

There is an ongoing controversy over whether the SARS-CoV-2 virus contains segments of the HIV virus. The virologist who won the Nobel Prize for discovering the HIV virus, Luc Montagnier, claims it does, and other experts claim it does not.

I’m not qualified to have an opinion on this dispute, and do not wish to discuss the dispute per se. But it brought to mind a problem I’d like to say something about.

This post will look at a simplified version of the problem posed by the controversy over the viruses that cause COVID-19 and AIDS. Namely, given two random DNA sequences, how long would you expect their longest common substring to be?

I’m not taking any biology into account here other than using an alphabet of four letters. And I’m only looking at exact substring matches, not allowing for any insertions or deletions.

Substrings and subsequences

There are two similar problems in computer science that could be confused: the longest common substring problem and the longest common subsequence problem. The former looks for uninterrupted matches and the latter allows for interruptions in matches in either input.

For example, “banana” is a subsequence of “bandana” but not a substring. The longest common substring of “banana” and “anastasia” is “ana” but the longest common subsequence is “anaa.” Note that the latter is not a substring of either word, but a subsequence of both.

Here we are concerned with substrings, not subsequences. Regarding the biological problem of comparing DNA sequences, it would seem that a substring is a strong form of similarity, but maybe too demanding since it allows no insertions. Also, the computer science idea of a subsequence might be too general for biology since it allows arbitrary insertions. A biologically meaningful comparison would be somewhere in between.
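
Here is a small sketch contrasting the two notions on the example above. difflib finds the longest common substring; the subsequence length is computed with the standard dynamic programming recurrence.

    from difflib import SequenceMatcher

    def longest_common_substring(a, b):
        m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
        return a[m.a : m.a + m.size]

    def longest_common_subsequence_length(a, b):
        # dp[i][j] = length of the longest common subsequence of a[:i] and b[:j]
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
        return dp[-1][-1]

    print(longest_common_substring("banana", "anastasia"))          # ana
    print(longest_common_subsequence_length("banana", "anastasia")) # 4, i.e. "anaa"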

Simulation results

I will state theoretical results in the next section, but I’d like to begin with a simulation.

First, some code to generate a DNA sequence of length N.

    from numpy.random import choice

    def dna_sequence(N):
        return "".join(choice(list("ACGT"), N))

Next, we need a way to find the longest common substring. We’ll illustrate how SequenceMatcher works before using it in a simulation.

    from difflib import SequenceMatcher

    N = 12
    seq1 = dna_sequence(N)
    seq2 = dna_sequence(N)   
 
    # "None" tells it to not consider any characters junk
    s = SequenceMatcher(None, seq1, seq2)

    print(seq1)
    print(seq2)
    print(s.find_longest_match(0, N, 0, N))

This produced

    TTGGTATCACGG
    GATCACAACTAC
    Match(a=5, b=1, size=5)

meaning that the first match of maximum length, ATCAC in this case, starts in position 5 of the first string and position 1 of the second string (indexing from 0).

Now let’s run a simulation to estimate the expected length of the longest common substring in two sequences of DNA of length 300. We’ll do 1000 replications of the simulation to compute our average.

    def max_common_substr_len(a, b):
        # isjunk=None means no characters are treated as junk;
        # autojunk=False turns off the automatic junk heuristic
        s = SequenceMatcher(None, a, b, autojunk=False)
        m = s.find_longest_match(0, len(a), 0, len(b))
        return m.size

    seqlen = 300
    numreps = 1000
    avg = 0
    for _ in range(numreps):
        a = dna_sequence(seqlen)
        b = dna_sequence(seqlen)
        avg += max_common_substr_len(a, b)
    avg /= numreps
    print(avg)

When I ran this code I got 7.951 as the average. Here’s a histogram of the results.
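
(The loop above only keeps a running average. Here is a sketch of how the individual lengths could be collected and plotted as a histogram, assuming matplotlib is available.)

    import matplotlib.pyplot as plt

    # Reuses dna_sequence and max_common_substr_len defined above.
    lengths = [max_common_substr_len(dna_sequence(300), dna_sequence(300))
               for _ in range(1000)]
    plt.hist(lengths, bins=range(min(lengths), max(lengths) + 2))
    plt.xlabel("longest common substring length")
    plt.show()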

Expected substring length

Arratia and Waterman [1] published a paper in 1985 looking at the longest common substring problem for DNA sequences, assuming each of the four letters in the DNA alphabet are uniformly and independently distributed.

They show that for strings of length N coming from an alphabet of size b, the expected length of the longest match is asymptotically

2 \log_b(N).

This says that in our simulation above we should expect results around 2 log_4(300) = 8.229.

There could be a couple of reasons our simulation results were smaller than the theoretical value. First, it could be a simulation artifact, but that’s unlikely: I repeated the simulation several times and consistently got results below the theoretical expected value. Second, I expect Arratia and Waterman’s result, although exact in the limit, is biased upward for finite N.

Update: The graph below plots simulation results for varying sequence sizes compared to the asymptotic value, the former always below the latter.

Simulation vs asymptotic estimate
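
Here is a sketch of how such a comparison can be generated, reusing the functions above; the sequence lengths and replication count are arbitrary choices.

    from math import log

    # Compare the simulated average longest-match length to the
    # asymptotic value 2*log_4(N) for several sequence lengths.
    for seqlen in [100, 300, 1000, 3000]:
        numreps = 200
        avg = sum(max_common_substr_len(dna_sequence(seqlen), dna_sequence(seqlen))
                  for _ in range(numreps)) / numreps
        asymptotic = 2 * log(seqlen) / log(4)
        print(seqlen, round(avg, 2), round(asymptotic, 2))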

Arratia and Waterman first assume i.i.d. probabilities to derive the result above, then prove a similar theorem under the more general assumption that the sequences are Markov chains with possibly unequal transition probabilities. The fact that actual DNA sequences are not exactly uniformly distributed and not exactly independent doesn’t make much difference to their result.

Arratia and Waterman also show that allowing for a finite number of deletions, or a number of deletions that grows sufficiently slowly as a function of n, does not change their asymptotic result. Clearly allowing deletions makes a difference for finite n, but the difference goes away in the limit.

Related posts

[1] Richard Arratia and Michael S. Waterman. An Erdős-Rényi Law with Shifts. Advances in Mathematics 55, 13-23 (1985).


Leap seconds

world clock

We all learn as children that there are 60 seconds in a minute, 60 minutes in an hour, 24 hours in a day, and 365 days in a year.

Then things get more complicated. There are more like 365.25 days in a year, hence leap years. Except that’s not quite right either. It’s more like 365.242 days in a year, which is why years divisible by 100 are not leap years unless they’re also divisible by 400. So even though 2000 was a leap year, 2100 will not be.

The ratio of the time it takes Earth to circle the sun to the time it takes it to rotate on its axis is not an integer, and not even a nice fraction. Why should it be? It’s convenient that the ratio is approximately 365 ¼, and that’s good enough for many purposes, but that’s not the full story. And not only is the ratio not a nice fraction, it’s not even constant!

The earth’s rotation is slowing imperceptibly. Atomic clocks made it possible to measure how much it’s slowing down, but they also made it necessary to measure it. Now that atomic time is widely used, say to synchronize computer networks, the discrepancy between atomic time and astronomical observation matters.

Leap seconds have been inserted into the year occasionally to keep atomic time in sync with time relative to Earth’s rotation. These cannot be inserted on a regular basis like leap days because the change in Earth’s rotation is not regular. So committees have to decide how and when to insert leap seconds, and the process can be surprisingly acrimonious.

Unix time

It is often said that Unix time is the number of seconds since the “Epoch,” midnight of January 1, 1970. But it’s not that simple because leap seconds are not included.

Suppose you were in London a few weeks ago. If you were sitting at a Linux command prompt to ring in the New Year and typed

    date +%s

at the stroke of midnight, the result would have been 1577836800. This number factors as

1577836800 = 60 × 60 × 24 × (365 × 50 + 12)

corresponding to the number of seconds in a day times the number of days since New Year’s Day 1970, including 12 leap days. However, if a device with an atomic clock beeped once per second since the Epoch, it would beep for the 1577836827th time as 2020 began because there have been 27 leap seconds since then.
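
You can check this with Python’s datetime module, which, like Unix time, ignores leap seconds.

    from datetime import datetime, timezone

    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    new_year_2020 = datetime(2020, 1, 1, tzinfo=timezone.utc)

    # datetime arithmetic does not count leap seconds, so this
    # matches the factorization above.
    print(int((new_year_2020 - epoch).total_seconds()))  # 1577836800
    print(60 * 60 * 24 * (365*50 + 12))                  # same number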

International Atomic Time

If you don’t want to deal with Daylight Saving Time, you can use Universal Coordinated Time (UTC) [1]. If you want to go a step further and not deal with leap seconds, then International Atomic Time (TAI) is for you [2].

With TAI, every day has exactly 86,400 seconds by definition. When that many seconds pass, it’s a new day. Makes things very simple, as long as you don’t have to make reference to UTC, which tries to accommodate Earth’s rotation.

At the moment, TAI is 37 seconds ahead of UTC. There have been 27 leap seconds since leap seconds were first instituted, and TAI started out 10 seconds ahead. The next time we add a leap second [3], TAI will be ahead by 38 seconds.

More posts on time

[1] Why is this not UCT? Because the French wanted to call it TUC for temps universel coordonné. The compromise was UTC, which doesn’t make sense in English or French.

[2] The French won this battle: TAI stands for temps atomique international.

[3] There are proposals to stop adding leap seconds, though the issue has been contentious. Maybe we won’t add any more leap seconds. Or maybe we’ll let them accumulate and add them all in at once, analogous to when several days were removed when converting from the Julian calendar to the Gregorian calendar.

Reminder of complexity

The other day I ran across the biochemical pathways poster from Roche.

biochemical pathways thumbnail

This is the first of two posters. Both posters are about 26 square feet in area. Below is a close up of about one square foot in the middle of the first poster.

middle of biochemical pathways poster

I’d seen this before, I think while I was working at MD Anderson Cancer Center, and it’s quite an impressive piece of work. A lot of work went into the production of the posters themselves, but the amount of hard-won knowledge the posters represent is mind-boggling. One little arrow on one poster might represent a career’s work.

A paper distributed with the charts explains that the pathways included represent a small selection of what is known.

Some indication of the degree of selection can be taken from the fact that in the present “Pathways” about 1,000 enzymes are shown. … Estimations of the number of proteins (with and without enzymatic activity) in a single mammalian cell are in the order of magnitude of 30,000.

I told a friend that I was thinking about getting a copy of the poster as a reminder of complexity, an analog of a memento mori meant to serve as a reminder of one’s mortality. The Latin phrase memento mori means “remember that you must die.” The biochemical pathways poster makes me think “remember that you are complex” or “remember the world is complex.”

I asked a Latin scholar what an appropriate aphorism would be, and she suggested the phrase memento complexitatis, which translates as “be mindful of complexity.” Another suggestion was omnia contexta sunt, meaning “all things have been braided.” As Rich Hickey explains in his popular video, complex literally means braided together.

Everything impacts everything. Independence is always a simplifying assumption. The assumption may be justified, but it is an assumption.

The poster is also a reminder that we need not throw up our hands in the face of complexity. Often the implication of mathematical discussions of complexity is that predicting the future is hopeless; there’s no point trying to understand the behavior of a complex system except in some abstract qualitative way. The Roche posters are inspiring reminders that with hard work, we can learn something about complex systems. In fact, over time we can learn quite a lot.

Related posts

Orbital resonance in Neptune’s moons

Phys.org published an article a couple of days ago, NASA finds Neptune moons locked in ‘dance of avoidance’. The article is based on the scholarly paper Orbits and resonances of the regular moons of Neptune.

The two moons closest to Neptune, named Naiad and Thalassa, orbit at nearly the same distance, 48,224 km for Naiad and 50,074 km for Thalassa. However, the moons don’t come as close to each other as you would expect from looking just at the radii of their orbits.

Although the radii of the orbits differ by 1850 km, the two moons never get closer than about 2800 km apart. The reason has to do with the inclination of the two orbital planes relative to each other, and with a resonance between their orbital periods. When the moons approach each other, one dips below the other, increasing their separation.

Assume the orbits of both moons are circular. (They very nearly are, with major and minor axes differing by less than a kilometer.)  Also, choose a coordinate system so that Thalassa orbits in the xy plane. The position of Thalassa at time t is

r_T (\cos 2\pi t/T_T, \sin 2\pi t/T_T, 0)

where r_T is the radius of Thalassa’s orbit and T_T is its period.

The orbit of Naiad is inclined to that of Thalassa by an angle u. The position of Naiad at time t is

r_N (\cos u \cos 2\pi t/T_N, \sin 2\pi t/T_N, -\sin u \cos 2\pi t/T_N).

I tried implementing the model above using data from Wikipedia articles on the two moons, but in my calculations the moons get closer than reported. I suspect the parameters have to be set carefully in order to demonstrate the kind of resonance we see in observation, and we don’t know these parameters accurately enough to make a direct geometric approach like this work.

    from numpy import *

    # Orbital radii in km
    r_T = 50074.44
    r_N = 48224.41

    # Periods in days
    T_T = 0.31148444
    T_N = 0.29439580

    def thalassa(t):
        # frequency
        f = 2*pi/T_T
        return r_T * array([
            cos(f*t),
            sin(f*t),
            0
        ])

    def naiad(t):
        # inclination in radians relative to Thalassa
        i = 0.082205
        # frequency
        f = 2*pi/T_N
        return r_N * array([
            cos(i)*cos(f*t),
            sin(f*t),
            -sin(i)*cos(f*t)
        ])

    def dist(t):
        return sum((naiad(t) - thalassa(t))**2)**0.5

    d = [dist(t) for t in linspace(0, 10*73*T_N, 1000)]
    print(d[0], min(d))

In this simulation, the moons are 4442 km apart when t = 0, but this is not the minimum distance. The code above shows an instance where the distance is 2021 km. I tried minor tweaks, such as adding a phase shift to one of the moons or varying the angle of inclination, but I didn’t stumble on anything that came closer to reproducing the observational results. Maybe the resonance depends on factors I’ve left out. Naiad and Thalassa are practically massless compared to Neptune, but perhaps the simulation needs to account for their mass.

The periods of the two moons are nearly in a ratio of 69 to 73. We could idealize the problem by making the periods rational numbers in exactly a ratio of 69 to 73. Then the system would be exactly periodic, and we could find the minimum distance. Studying a model system close to the real one might be useful. If we do tweak the periods, we’d need to tweak the radii as well so that Kepler’s third law applies, i.e. we’d need to keep the squares of the periods proportional to the cubes of the radii.
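
Here is a sketch of that tweak, keeping Thalassa’s orbit as given and rescaling Naiad’s period and radius; this is an illustration of the idea, not part of the original calculation.

    # T_T and r_T as defined in the code above.
    # Idealize the periods to an exact 69:73 ratio and adjust Naiad's radius
    # so that Kepler's third law (T^2 proportional to r^3) still holds.
    T_N_ideal = T_T * 69/73
    r_N_ideal = r_T * (T_N_ideal/T_T)**(2/3)
    print(T_N_ideal, r_N_ideal)   # about 0.294 days and about 48,230 km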

Related post: Average distance between planets

What exactly is a day?

How many days are in a year? 365.

How many times does the earth rotate on its axis in a year? 366.

If you think a day is the time it takes for earth to rotate once around its axis, you’re approximately right, but off by about four minutes.

What we typically mean by “day” is the time it takes for earth to return to the same position relative to the sun, i.e. the time from one noon to the next. But because the earth is orbiting the sun as well as rotating on its axis, it has to complete a little more than one rotation around its axis to bring the sun back to the same position.

Since there are approximately 360 days in a year, the earth has to rotate about 361 degrees each day to bring the sun back into the same position. This extra rotation takes 1/360th of a day, or about 4 minutes. (We’ll be more precise below.)

Here’s another way to see that the number of rotations has to be more than the number of days in a year. To make the numbers simple, assume there are 360 days in a year. Suppose you’re looking up at the noon sun one day. Now suppose it is exactly 180 days later and the earth has rotated exactly 180 times. The earth is now on the opposite side of the sun, so you would be facing away from the sun: it would be midnight rather than noon. For it to be noon again 180 days later, the earth would have to have made 180.5 rotations, making 361 rotations in a 360-day year.

Sidereal days

Imagine an observer looking straight down at the Earth’s north pole from several light years away. Imagine also an arrow from the center of the Earth to a fixed point on the equator. The time it takes for the observer to see that arrow return to the same angle is a sidereal day. The time it takes for that arrow to go from pointing at the sun to pointing at the sun again is a solar day, which is about four minutes longer [1].

If you assume a year has 365.25 (solar) days, but also assume Earth is in a circular orbit around the sun, you’ll calculate a sidereal day to be about 3 minutes and 57 seconds shorter than a solar day. Taking the elliptical shape of Earth’s orbit into account takes about a second off that amount.

More astronomy posts

[1] Technically this is an apparent solar day, a solar day from the perspective of an observer. The length of an apparent solar day varies through the year, but we won’t go into that here.

How fast were dead languages spoken?

A new paper in Science suggests that all human languages carry about the same amount of information per unit time. In languages with fewer possible syllables, people speak faster. In languages with more syllables, people speak slower.

Researchers quantified the information content per syllable in 17 different languages by calculating Shannon entropy. When you multiply the information per syllable by the number of syllables per second, you get around 39 bits per second across a wide variety of languages.

If a language has N possible syllables, and the probability of the ith syllable occurring in speech is pi, then the average information content of a syllable, as measured by Shannon entropy, is

-\sum_{i=1}^N p_i \log_2 p_i

For example, if a language had only eight possible syllables, all equally likely, then each syllable would carry 3 bits of information. And in general, if there were 2^n syllables, all equally likely, then the information content per syllable would be n bits, just like a string of n zeros and ones, hence the term bits.

Of course not all syllables are equally likely to occur, and so it’s not enough to know the number of syllables; you also need to know their relative frequency. For a fixed number of syllables, the more evenly the frequencies are distributed, the more information is carried per syllable.
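
Here is a minimal sketch of the entropy calculation; the skewed distribution at the end is made up for illustration.

    from math import log2

    def entropy(probs):
        # Shannon entropy in bits per syllable
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([1/8] * 8))   # eight equally likely syllables: 3.0 bits
    # A skewed distribution over the same eight syllables carries less information.
    print(entropy([0.5, 0.2, 0.1, 0.05, 0.05, 0.04, 0.03, 0.03]))   # about 2.2 bits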

If ancient languages conveyed information at 39 bits per second, as a variety of modern languages do, one could calculate the entropy of the language’s syllables and divide 39 by the entropy to estimate how many syllables the speakers spoke per second.

According to this overview of the research,

Japanese, which has only 643 syllables, had an information density of about 5 bits per syllable, whereas English, with its 6949 syllables, had a density of just over 7 bits per syllable. Vietnamese, with its complex system of six tones (each of which can further differentiate a syllable), topped the charts at 8 bits per syllable.

One could do the same calculations for Latin, or ancient Greek, or Anglo-Saxon that the researchers did for Japanese, English, and Vietnamese.

If all 643 syllables of Japanese were equally likely, the language would convey -log2(1/643) = 9.3 bits of information per syllable. The overview says Japanese carries 5 bits per syllable, and so the efficiency of the language is 5/9.3 or about 54%.

If all 6949 syllables of English were equally likely, a syllable would carry 12.7 bits of information. Since English carries around 7 bits of information per syllable, the efficiency is 7/12.7 or about 55%.

Taking a wild guess by extrapolating from only two data points, maybe around 55% efficiency is common. If so, you could estimate the entropy per syllable of a language just from counting syllables.

Related posts

Star-crossed lovers

A story in The New Yorker quotes the following explanation from Arthur Eddington regarding relativity and the speed of light.

Suppose that you are in love with a lady on Neptune and that she returns the sentiment. It will be some consolation for the melancholy separation if you can say to yourself at some—possibly prearranged—moment, “She is thinking of me now.” Unfortunately a difficulty has arisen because we have had to abolish Now … She will have to think of you continuously for eight hours on end in order to circumvent the ambiguity of “Now.”

This reminded me of The Book of Strange New Things. This science fiction novel has several themes, but one is the effect of separation on a relationship. Even if you have faster-than-light communication, how does it affect you to know that your husband or wife is light years away? The communication delay might be no more than was common before the telegraph, but the psychological effect is different.

Related post: Eddington’s constant