S and C functions

I was reading a book on orbital mechanics recently [1], and one of the things that stood out was the use of two variations on sine and cosine, functions the book denotes by S and C.

\begin{align*} S(z) &= \frac{\sqrt{z} - \sin(\sqrt{z})}{\sqrt{z^3}} = \sum_{k=0}^\infty \frac{(-z)^k}{(2k+3)!} \\ C(z) &= \frac{\phantom{\sqrt{}}1 - \cos(\sqrt{z})}{z} = \sum_{k=0}^\infty \frac{(-z)^k}{(2k+2)!} \end{align*}

Strictly speaking, the functions are defined to be the analytic continuation of the middle expressions to the full complex plane, and these continuations have the given power series.

These functions come up in deriving time of flight equations. The direct derivation has expressions that would have a singularity at zero because of square root terms, and recasting the solution in terms of the functions above avoids the singularities.

(Are these functions used outside of orbital mechanics? Do they have names other than S and C? If you’ve seen them before, please let me know.)

I find these functions interesting for several reasons. First, it’s interesting that the power series for the two functions are so similar.

Second, the functions S and C are derived from sine and cosine, but you might not guess that from looking at their plots.

The power series forms of the two functions are similar, but the plots of the two functions are not.

Third, you might expect the derivative of a sine-like thing to be a cosine-like thing and vice versa. That’s sort of true in our case. The derivatives of S and C are related to S and C, but it’s a little complicated.

\begin{align*} S'(z) &= \frac{1}{2z}\big(C(z) - 3S(z)\big) \\ C'(z) &= \frac{1}{2z}\big(1 - 2C(z) - zS(z)\big) \end{align*}

Maybe it’s surprising that the relationship is as complicated as it is, or maybe it’s surprising that it’s not more complicated.
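Both identities are easy to check numerically. Here is a sketch using central differences, for positive z only; the function names, test point, and step size are my own choices.

```python
import math

def S(z):
    # S for z > 0, directly from the definition
    s = math.sqrt(z)
    return (s - math.sin(s)) / s**3

def C(z):
    # C for z > 0, directly from the definition
    return (1 - math.cos(math.sqrt(z))) / z

def deriv(f, z, h=1e-6):
    # central-difference approximation to f'(z)
    return (f(z + h) - f(z - h)) / (2*h)

z = 2.0
lhs_S, rhs_S = deriv(S, z), (C(z) - 3*S(z)) / (2*z)
lhs_C, rhs_C = deriv(C, z), (1 - 2*C(z) - z*S(z)) / (2*z)
```

The two sides of each identity agree to well within the accuracy of the finite difference approximation.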

Finally, when z is a real variable, we can evaluate S with

S(x) = \left\{ \begin{array}{ll} \big(\sqrt{x} - \sin(\sqrt{x})\big)/\sqrt{x^3} & \mbox{if } x > 0 \\ 1/6 & \mbox{if } x = 0 \\ \big(\sinh(\sqrt{-x})- \sqrt{-x}\big)/\sqrt{(-x)^3} & \mbox{if } x < 0 \end{array} \right.

and evaluate C with

C(x) = \left\{ \begin{array}{ll} \big(1-\cos\sqrt{x}\big)/x & \mbox{if } x > 0 \\ 1/2 & \mbox{if } x = 0 \\ \big(1-\cosh\sqrt{-x}\big)/x & \mbox{if } x < 0 \end{array} \right.

Written this way, it looks like the functions S and C are arbitrarily patched together. But the functions are smooth at 0, even analytic at 0 as the power series shows. The equations are not arbitrary at all but uniquely determined.
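The piecewise formulas translate directly into Python. This sketch leaves out one refinement a careful implementation would include: for x very near zero, both nonzero branches subtract nearly equal quantities, so one would switch to a few terms of the power series there instead.

```python
import math

def S(x):
    # Piecewise evaluation of S for real x
    if x > 0:
        s = math.sqrt(x)
        return (s - math.sin(s)) / s**3
    elif x < 0:
        s = math.sqrt(-x)
        return (math.sinh(s) - s) / s**3
    return 1/6

def C(x):
    # Piecewise evaluation of C for real x
    if x > 0:
        return (1 - math.cos(math.sqrt(x))) / x
    elif x < 0:
        return (1 - math.cosh(math.sqrt(-x))) / x
    return 1/2
```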

Update: Someone contributed this post to Hacker News and someone else left a helpful comment with some historical background. Apparently this work goes back to Karl Stumpff and was written up in a NASA technical report. It was further developed at MIT and written up in a textbook by Richard Battin.

More orbital mechanics posts

[1] Fundamentals of Astrodynamics by Roger R Bate, Donald D. Mueller, and Jerry E. White. Dover, 1971.

Efficiently solving Kepler’s equation

A couple years ago I wrote a blog post on Kepler’s equation

M + e sin E = E.

Given mean anomaly M and eccentricity e, you want to solve for eccentric anomaly E.

There is a simple way to solve this equation. Define

f(E) = M + e sin E

and take an initial guess at the solution and stick it into f. Then take the output and stick it back into f, over and over, until you find a fixed point, i.e. f(E) = E.
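In code, the iteration might look like this. The function name, starting guess, and stopping tolerance are my choices; the iteration converges because |f′(E)| = |e cos E| ≤ e < 1.

```python
import math

def kepler_fixed_point(M, e, tol=1e-12, max_iter=1000):
    # Repeatedly apply f(E) = M + e sin E until the output stops changing
    E = M   # M itself is a reasonable first guess
    for _ in range(max_iter):
        E_new = M + e * math.sin(E)
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    return E

E = kepler_fixed_point(1.0, 0.1)
```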

Faster solutions

The algorithm above is elegant, and practical if you only need to do it once. However, if you need to solve Kepler’s equation billions of times, say in the process of tracking satellite debris, this isn’t fast enough.

An obvious improvement would be to use Newton’s root-finding method rather than the simple iteration scheme above, and this isn’t far from the state of the art. However, there have been improvements over Newton’s method, and a paper posted on arXiv this week gives an algorithm that is about 3 times faster than Newton’s method [1].
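For comparison, here is a sketch of Newton's method applied to g(E) = E − e sin E − M = 0; the starting value and tolerance are my own choices, not from the paper.

```python
import math

def kepler_newton(M, e, tol=1e-12, max_iter=50):
    # Newton iteration: E <- E - g(E)/g'(E), where g'(E) = 1 - e cos E
    E = M if e < 0.8 else math.pi   # a common choice of starting value
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = kepler_newton(1.0, 0.1)
```

Newton's method converges quadratically near the root, so it typically needs far fewer iterations than the fixed-point scheme.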

Patterns in applied math

This paper is an example of a common pattern in applied math. It starts with a simple problem that has a simple solution, but this simple solution doesn’t scale. And so we apply advanced mathematics to a problem formulated in terms of elementary mathematics.

In particular, the paper makes use of contour integration. This seems like a step backward in two ways.

First, we have a root-finding problem, but you want to turn it into an integration problem?! Isn’t root-finding faster than integration? Not in this case.

Second, not only are we introducing integration, we’re introducing integration in the complex plane. Isn’t complex analysis complex? Not in the colloquial sense. The use of “complex” as a technical term is unfortunate because complex analysis often simplifies problems. As Jacques Hadamard put it,

The shortest path between two truths in the real domain passes through the complex domain.

Related posts

[1] Oliver H. E. Philcox, Jeremy Goodman, Zachary Slepian. Kepler’s Goat Herd: An Exact Solution for Elliptical Orbit Evolution. arXiv:2103.15829

NASA’s favorite ODE solver

NASA’s Orbital Flight Handbook, published in 1963, is a treasure trove of technical information, including a section comparing the strengths and weaknesses of several numerical methods for solving differential equations.

The winner was a predictor-corrector scheme known as Gauss-Jackson, a method I have not heard of outside of orbital mechanics, but one apparently particularly well suited to NASA’s needs.

The Gauss-Jackson second-sum method is strongly recommended for use in either Encke or Cowell [approaches to orbit modeling]. For comparable accuracy, it will allow step-sizes larger by factors of four or more than any of the fourth order methods. … As compared with unsummed methods of comparable accuracy, the Gauss-Jackson method has the very important advantage that roundoff error growth is inhibited. … The Gauss-Jackson method is particularly suitable on orbits where infrequent changes in the step-size are necessary.

Here is a table summarizing the characteristics of each of the solvers.

Notice that Gauss-Jackson is the only method whose roundoff error accumulation is described as “excellent.”

A paper from 2004 [1] implies that the Gauss-Jackson method was still in use at NASA at the time of writing.

The Gauss-Jackson multi-step predictor-corrector method is widely used in numerical integration problems for astrodynamics and dynamical astronomy. The U.S. space surveillance centers have used an eighth-order Gauss-Jackson algorithm since the 1960s.

I could imagine a young hotshot explaining to NASA why they should use some other ODE solver, only to be told that the agency had already evaluated the alternatives half a century ago, and that the competitors didn’t have the same long-term accuracy.

More math and space posts

[1] Matthew M. Berry and Liam M. Healy. Implementation of the Gauss-Jackson Integration for Orbit Propagation. The Journal of the Astronautical Sciences, Vol 52, No 3, July-September 2004, pp. 311–357.

Hohmann transfer orbit

How does a spacecraft orbiting a planet move from one circular orbit to another? It can’t just change lanes like a car going around a racetrack because speed and altitude cannot be changed independently.

The most energy-efficient way to move between circular orbits is the Hohmann transfer orbit [1]. The Hohmann orbit is an idealization, but it approximates maneuvers actually done in practice.

The Hohmann transfer requires applying thrust twice: once to leave the first circular orbit for the elliptical transfer orbit, and once again to leave the elliptical orbit for the new circular orbit.

Hohmann transfer orbit

Suppose we’re in the orbit represented by the inner blue circle above and we want to move to the outer green circle. We apply our first instantaneous burst of thrust, indicated by the inner ×, and that puts us into the orange elliptical orbit.

(We can’t move faster in our current orbit without continually applying thrust because velocity determines altitude. The new orbit will pass through the point at which we applied the thrust, and so our new orbit cannot be a circle because distinct concentric circles don’t intersect.)

The point at which we first apply thrust will be the point of the new orbit closest to the planet, the point with maximum kinetic energy. The point furthest from the planet, the point with maximum potential energy, will occur 180° later on the opposite side. The first burst of thrust is calculated so that the maximum altitude of the resulting elliptical orbit is the desired altitude of the new circular orbit.

Once the elliptical orbit is at its maximum distance from the planet, marked by the outer ×, we apply the second thrust.  The amount of thrust is whatever it needs to be in order to maintain a circular orbit at the new altitude. The second half of the elliptical orbit, indicated by the dashed orange curve, is not taken; it’s only drawn to show the orbit we would stay on if we didn’t apply the second thrust.

So in summary, we use one burst of thrust to enter an elliptic orbit, and one more burst of thrust to leave that elliptical orbit for the new circular orbit. There are ways to move between circular orbits more quickly, but they require more fuel.

The same principles work in reverse, and so you could also use a Hohmann transfer to descend from a higher orbit to a lower one. You would apply your thrust in the direction opposite your motion.

There are several idealizations in the Hohmann transfer orbit. The model assumes the orbits are planar, that the initial orbit and the final orbit are circular, and that the two burns are each instantaneous.

The Hohmann transfer also assumes that the mass of the spacecraft is negligible compared to the planet. This would apply, for example, to a typical communication satellite, but perhaps not to a Death Star.
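The post describes the two burns qualitatively. The standard way to put numbers on them is the vis-viva equation, v² = μ(2/r − 1/a), applied to the circular orbits and to the transfer ellipse. A sketch; the example values (Earth's gravitational parameter, a 7000 km orbit to geostationary radius) are illustrative and not from the post.

```python
import math

def hohmann_delta_vs(mu, r1, r2):
    # Sizes of the two burns for a Hohmann transfer from a circular
    # orbit of radius r1 to a circular orbit of radius r2 > r1.
    a = (r1 + r2) / 2                       # transfer ellipse semi-major axis
    v1 = math.sqrt(mu / r1)                 # speed in initial circular orbit
    v2 = math.sqrt(mu / r2)                 # speed in final circular orbit
    v_peri = math.sqrt(mu * (2/r1 - 1/a))   # ellipse speed at the first burn
    v_apo  = math.sqrt(mu * (2/r2 - 1/a))   # ellipse speed at the second burn
    return v_peri - v1, v2 - v_apo

# Example: 7000 km orbit to geostationary radius, mu in km^3/s^2
dv1, dv2 = hohmann_delta_vs(398600.4, 7000.0, 42164.0)
```

For this example the first burn is about 2.3 km/s and the second about 1.4 km/s.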

More orbital mechanics posts

[1] If you’re moving to an orbit whose radius is more than about 12 times the original, then the bi-elliptic transfer maneuver would use less fuel. Instead of taking half of an elliptical orbit to make the transfer, it fires thrusters three times, using half each of two different elliptical orbits to reach the desired circular orbit.

New math for going to the moon


Before I went to college, I’d heard that it took new math and science for Apollo to get to the moon. Then in college I picked up the idea that Apollo required a lot of engineering, but not really any new math or science. Now I’ve come full circle and have some appreciation for the math research that was required for the Apollo landings.

Celestial mechanics had been studied long before the Space Age, but that doesn’t mean the subject was complete. According to One Giant Leap,

In the weeks after Sputnik, one Langley [Research Center] scientist went looking for books on orbital mechanics—how to fly in space—and in the Langley technical library he found exactly one: Forest R. Moulton’s [1] An Introduction to Celestial Mechanics. In 1958 Langley was in possession of one of the most recent editions of Moulton: the 1914 update of the 1902 edition.

I have a quibble with part of the quote above. The author describes orbital mechanics as “how to fly in space.” More technically, at the time, orbital mechanics was “how things fly through space.” Orbital mechanics was passive. You wanted to know how, for example, Titan moves around Saturn. Nobody asked about the most efficient way to change the orbit of Titan so that it ends up at a certain place at a certain time.

NASA needed active orbital mechanics. It had to do more than simply describe existing orbits; it had to design orbits. And it had to control orbits. None of the terms in your equations are known to infinite precision, so it is not enough to understand the exact equations under ideal circumstances. You have to understand how uncertainties in the parts impact the whole, and how to adjust for them.

And all this has to be done in a computer with about 500 kilobits of ROM [2]. Because the computer memory was limited, NASA had to know which terms in the equations could be dropped, what approximations could be made, etc. Understanding how to approximate a system well with limited resources is much harder than working with exact equations [3].

Nobody at NASA would have said “We’ve got the math in the bag. Now we just need the engineers to get busy.”

Related posts

[1] This is the same Moulton of Adams-Moulton and Adams-Bashforth-Moulton numerical methods for solving differential equations. Presumably Mr. Moulton’s interest in numerical solutions to differential equations came out of his interest in celestial mechanics. See where Adams-Moulton fits into the ODE solver landscape in the next post.

[2] Each word in the Apollo Guidance Computer was 15 bits of data plus one check bit. There were 2048 words of RAM and 36,864 words of ROM. This amounts to 552,960 bits of ROM, excluding check bits, or about 68 kilobytes using 8-bit bytes.

[3] Not that the “exact” equations are actually exact. When you write down the equations of motion for three point masses, for example, you’ve already done a great deal of simplification.

The orbit that put men on the moon

Richard Arenstorf (1929–2014) discovered a stable periodic orbit between the Earth and the Moon which was used as the basis for the Apollo missions.

His orbit is a special case of the three body problem where two bodies are orbiting in a plane, i.e. the Earth and the Moon, along with a third body of negligible mass relative to the other bodies, i.e. the satellite.

The system of differential equations for the Arenstorf orbit is

\begin{align*} x'' &= x + 2y' - \mu' \frac{x+\mu}{D_1} - \mu \frac{x - \mu'}{D_2} \\ y'' &= y - 2x' - \mu' \frac{y}{D_1} - \mu \frac{y}{D_2} \end{align*}

where

\begin{align*} D_1 &= ((x + \mu)^2 + y^2)^{3/2} \\ D_2 &= ((x - \mu')^2 + y^2)^{3/2} \end{align*}

Here the Earth is at the origin and the Moon is initially at (0, 1). The mass of the Moon is μ = 0.012277471 and the mass of the Earth is μ’ = 1-μ.

The initial conditions are

\begin{align*} x(0) &= \phantom{-}0.994 \\ x'(0) &= \phantom{-}0 \\ y(0) &= \phantom{-}0 \\ y'(0) &= -2.001585106 \end{align*}
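The equations and initial conditions above are enough to reproduce the orbit numerically. Here is a sketch using SciPy's solve_ivp; the period value is the high-precision one quoted in Hairer et al. [1].

```python
import numpy as np
from scipy.integrate import solve_ivp

mu  = 0.012277471   # normalized mass of the Moon
mup = 1 - mu        # normalized mass of the Earth

def rhs(t, u):
    # State u = (x, y, x', y'); equations of motion from the text
    x, y, xd, yd = u
    D1 = ((x + mu)**2 + y**2)**1.5
    D2 = ((x - mup)**2 + y**2)**1.5
    xdd = x + 2*yd - mup*(x + mu)/D1 - mu*(x - mup)/D2
    ydd = y - 2*xd - mup*y/D1 - mu*y/D2
    return [xd, yd, xdd, ydd]

u0 = [0.994, 0.0, 0.0, -2.001585106]
period = 17.0652165601579625   # orbital period reported in [1]
sol = solve_ivp(rhs, [0, period], u0, method="DOP853",
                rtol=1e-12, atol=1e-12)
# After one period the trajectory should return near its starting point.
```

The tight tolerances matter: the orbit passes close to the Moon, where the equations are very sensitive, and a loosely integrated trajectory will not close up.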

Here’s a plot of the orbit.

Arenstorf orbit

I found the equations above in [1], which cites the original paper [2]. Note that the paper was written in 1963, seven years before the Apollo missions. Also, before leaving NASA Arenstorf mapped out a rescue orbit. This orbit was later used on Apollo 13.

Richard Arenstorf

I was fortunate to do my postdoc at Vanderbilt before Arenstorf retired and was able to sit in on an introductory course he taught on orbital mechanics. His presentation was leisurely and remarkably clear.

His course was old-school “hard analysis,” much more concrete than the abstract “soft analysis” I had studied in graduate school. He struck me as a 19th century mathematician transported to the 20th century.

He scoffed at merely measurable functions. “Have you ever seen a function that wasn’t analytic?” This would have been heresy at my alma mater.

When I asked him about “Arenstorf’s theorem” from a recently published book I was reading, he said that he didn’t recognize it. I forget now how it was stated, maybe involving Banach spaces and/or manifolds. Arenstorf was much more concrete. He wanted to help put a man on the Moon, not see how abstractly he could state his results.

More orbital mechanics posts

[1] Hairer, Nørsett, and Wanner. Solving Ordinary Differential Equations I: Nonstiff Problems. Springer-Verlag 1987.

[2] Richard F. Arenstorf. Periodic Solutions of the Restricted Three Body Problem Representing Analytic Continuations of Keplerian Elliptic Motion. American Journal of Mathematics, Vol. 85, No. 1 (Jan., 1963), pp. 27–35.

Orbital resonance in Neptune’s moons

Phys.org published an article a couple of days ago, NASA finds Neptune moons locked in ‘dance of avoidance’. The article is based on the scholarly paper Orbits and resonances of the regular moons of Neptune.

The two moons closest to Neptune, named Naiad and Thalassa, orbit at nearly the same distance, 48,224 km for Naiad and 50,074 km for Thalassa. However, the moons don’t come as close to each other as you would expect from looking just at the radii of their orbits.

Although the radii of the orbits differ by 1850 km, the two moons do not get closer than about 2800 km apart. The reason has to do with the inclination of the two orbital planes with each other, and a resonance between their orbital periods. When the moons approach each other, one dips below the other, increasing their separation.

Assume the orbits of both moons are circular. (They very nearly are, with major and minor axes differing by less than a kilometer.)  Also, choose a coordinate system so that Thalassa orbits in the xy plane. The position of Thalassa at time t is

rT  (cos 2πt/TT, sin 2πt/TT, 0)

where rT is the radius of Thalassa’s orbit and TT is its period.

The orbit of Naiad is inclined to that of Thalassa by an angle u. The position of Naiad at time t is

rN (cos u cos 2πt/TN, sin 2πt/TN, -sin u cos 2πt/TN).

I tried implementing the model above using data from Wikipedia articles on the two moons, but in my calculations the moons get closer than reported. I suspect the parameters have to be set carefully in order to demonstrate the kind of resonance we see in observation, and we don’t know these parameters accurately enough to make a direct geometric approach like this work.

from numpy import *

# Orbital radii in km
r_T = 50074.44
r_N = 48224.41

# Periods in days
T_T = 0.31148444
T_N = 0.29439580

def thalassa(t):
    # angular frequency
    f = 2*pi/T_T
    return r_T * array([cos(f*t), sin(f*t), 0])

def naiad(t):
    # inclination in radians relative to Thalassa's orbital plane
    i = 0.082205
    # angular frequency
    f = 2*pi/T_N
    return r_N * array([cos(i)*cos(f*t), sin(f*t), -sin(i)*cos(f*t)])

def dist(t):
    return sum((naiad(t) - thalassa(t))**2)**0.5

d = [dist(t) for t in linspace(0, 10*73*T_N, 1000)]
print(d[0], min(d))

In this simulation, the moons are 4442 km apart when t = 0, but this is not the minimum distance. The code above shows an instance where the distance is 2021 km. I tried minor tweaks, such as adding a phase shift to one of the moons or varying the angle of inclination, but I didn’t stumble on anything that came closer to reproducing the observational results. Maybe resonance depends on factors I’ve left out. Naiad and Thalassa are practically massless compared to Neptune, but perhaps the simulation needs to account for their mass.

The periods of the two moons are nearly in a ratio of 69 to 73. We could idealize the problem by making the periods rational numbers in exactly a ratio of 69 to 73. Then the system would be exactly periodic, and we could find the minimum distance. Studying a model system close to the real one might be useful. If we do tweak the periods, we’d need to tweak the radii as well so that Kepler’s third law applies, i.e. we’d need to keep the squares of the periods proportional to the cubes of the radii.
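Here is what that idealization looks like (the variable names are mine): keep Naiad's period, set Thalassa's to exactly 73/69 of it, and rescale Thalassa's radius so that Kepler's third law still holds.

```python
T_N = 0.29439580          # Naiad's period in days, kept fixed
T_T = T_N * 73 / 69       # Thalassa's period, forced into exact resonance

r_N = 48224.41            # Naiad's orbital radius in km, kept fixed
# Kepler's third law: T^2 proportional to r^3, so r scales as T^(2/3)
r_T = r_N * (T_T / T_N)**(2/3)
```

The rescaled radius comes out near the observed 50074 km, a sanity check that the real system is already close to this resonance.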

Related post: Average distance between planets