Calculating when a planet will appear to move backwards

When we say that the planets in our solar system orbit the sun, not the earth, we mean that the motion of the planets is much simpler to describe from the vantage point of the sun. The sun is no more the center of the universe than the earth is. Describing the motion of the planets from the perspective of our planet is not wrong, but it is inconvenient. (For some purposes. It’s quite convenient for others.)

The word planet means “wanderer.” This is because the planets appear to wander in the night sky. Unlike stars, which appear to orbit the earth in a circle as the earth orbits the sun, planets appear to occasionally reverse course. When planets appear to move backward, this is called retrograde motion.

Apparent motion of Venus

Here’s what the motion of Venus would look like over a period of 8 years as explored here.

Venus from Earth

Venus completes 13 orbits around the sun in the time it takes Earth to complete 8 orbits. The ratio isn’t exactly 13 to 8, but it’s very close. Five times over the course of eight years Venus will appear to reverse course for a few days. How many days? We will get to that shortly.

When we speak of the motion of the planets through the night sky, we’re not talking about their rising and setting each day due to the rotation of the earth on its axis. We’re talking about their motion from night to night. The image above is how an observer far above the Earth and not rotating with the Earth would see the position of Venus over the course of eight years.

The orbit of Venus as seen from earth is beautiful but complicated. From the Copernican perspective, the orbits of Earth and Venus are simply concentric circles. You may bristle at my saying planets have circular rather than elliptical orbits [1]. The orbits are not exactly circles, but are so close to circular that you cannot see the difference. For the purposes of this post, we’ll assume planets orbit the sun in circles.

Calculating retrograde periods

There is a surprisingly simple equation [2] for finding the points where a planet will appear to change course:

\cos(kt) = \frac{\sqrt{Rr}}{R + r - \sqrt{Rr}}

Here r is the radius of Earth’s orbit and R is the radius of the other planet’s orbit [3]. The constant k is the difference in angular velocities of the two planets. You can solve this equation for the times when the apparent motion changes.

Note that the equation is entirely symmetric in r and R. So a Venusian observing Earth and an Earthling observing Venus would agree on the times when the apparent motions of the two planets reverse.

Example calculation

Let’s find when Venus enters and leaves retrograde motion. Here are the constants we need.

from math import pi, sqrt

r = 1       # radius of Earth's orbit, AU
R = 0.72332 # radius of Venus's orbit, AU

venus_year = 224.70 # days
earth_year = 365.24 # days
k = 2*pi/venus_year - 2*pi/earth_year # difference in angular velocities, radians per day

c = sqrt(r*R) / (r + R - sqrt(r*R))
With these constants we can now plot cos(kt) and see when it equals c.

This shows there are five times over the course of eight years when Venus is in apparent retrograde motion.

If we set time t = 0 to be a time when Earth and Venus are aligned, we start in the middle of a retrograde period. Venus enters prograde motion 21 days later, and the next retrograde period begins at day 563. So out of every 584 days, Venus spends 42 days in retrograde motion and 542 days in prograde motion.
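
If you’d like to reproduce these numbers, here is a minimal sketch in Python. It repeats the constants above, solves cos(kt) = c for the first positive t, and computes the synodic period 2π/k.

from math import pi, sqrt, acos

r, R = 1.0, 0.72332                  # orbital radii in AU (Earth, Venus)
k = 2*pi/224.70 - 2*pi/365.24        # difference in angular velocities, radians per day
c = sqrt(r*R) / (r + R - sqrt(r*R))

t = acos(c) / k                      # days from an Earth-Venus alignment to the end of retrograde motion
print(round(t), round(2*t), round(2*pi/k))   # about 21, 42, and 584 days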


[1] Planets do not exactly orbit in circles. They don’t exactly orbit in ellipses either. Modeling orbits as ellipses is much more accurate than modeling orbits as circles, but still not perfectly accurate.

[2] 100 Great Problems of Elementary Mathematics: Their History and Solution. By Heinrich Dörrie. Dover, 1965.

[3] There’s nothing unique about observing planets from Earth. Here “Earth” simply means the planet you’re viewing from.

A “well-known” series

I was reading an article [1] that refers to “a well-known trigonometric series” that I’d never seen before. This paper cites [2] which gives the series as

\begin{align*} \frac{\sin m\phi}{\cos \phi} &= m\sin\phi - \frac{m(m^2-2^2)}{3!}\sin^3\phi \\ &\phantom{=} \;+ \frac{m(m^2-2^2)(m^2 - 4^2)}{5!}\sin^5\phi - \cdots \end{align*}

Note that the right hand side is not a series in φ but rather in sin φ.

Motivation

Why might you know sin φ and want to calculate sin mφ / cos φ? This doesn’t seem like a sufficiently common task for the series to be well-known. The references are over a century old, and maybe the series was useful for hand calculations in a way that is no longer necessary.

However, [1] was using the series for a theoretical derivation, not for calculation; the author was doing some hand-wavy derivation, sticking the difference operator E into a series as if it were a number, a technique known as “umbral calculus.” The name comes from the Latin word umbra for shadow. The name referred to the “shadowy” nature of the technique which wasn’t made rigorous until much later.

Convergence

The series above terminates if m is an even integer. But there are no restrictions on m, and in general the series is infinite.

The series obviously has trouble if cos φ = 0, i.e. when φ = ±π/2, but it converges for all m if −π/2 < φ < π/2.

Tangent

If m = 1, sin mφ / cos φ is simply tan φ. The function tan φ has a complicated power series in φ involving Bernoulli numbers, but it has a simpler power series in sin φ.
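
Here’s a quick numerical sanity check of the series, a sketch in Python. It sums the series in powers of sin φ and compares the result to computing sin mφ / cos φ directly. (The function name and the number of terms are my choices.)

from math import sin, cos, factorial

def series_sum(m, phi, terms=20):
    """Partial sum of the series for sin(m*phi)/cos(phi) in powers of sin(phi)."""
    s = sin(phi)
    coeff = m            # running product m(m^2 - 2^2)(m^2 - 4^2)...
    total = 0.0
    for j in range(terms):
        n = 2*j + 1      # odd power of sin(phi)
        total += (-1)**j * coeff * s**n / factorial(n)
        coeff *= m*m - (2*(j + 1))**2
    return total

m, phi = 2.5, 0.4
print(series_sum(m, phi), sin(m*phi)/cos(phi))   # both approximately 0.9136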

References

[1] G. J. Lidstone. Notes on Everett’s Interpolation Formula. 1922

[2] E. W. Hobson. A Treatise on Plane Trigonometry. Fourth Edition, 1918. Page 276.

Approximation by prime powers

The well-known Weierstrass approximation theorem says that polynomials are dense in C [0, 1]. That is, given any continuous function f on the unit interval, and any ε > 0, you can find a polynomial P such that f and P are never more than ε apart.

This means that linear combinations of the polynomials

1, x, x², x³, …

are dense in C [0, 1].

Do you need all these powers of x? Could you approximate any continuous function arbitrarily well if you left out one of these powers, say x⁷? Yes, you could.

You cannot omit the constant polynomial 1, but you can leave out any other power of x. In fact, you can leave out a lot of powers of x, as long as the sequence of exponents doesn’t thin out too quickly.

Müntz approximation theorem

Herman Müntz proved in 1914 that a necessary and sufficient pair of conditions on the exponents of x is that the first exponent is 0 and that the sum of the reciprocals of the exponents diverges.

In other words, the sequence of powers of x

x^λ₀, x^λ₁, x^λ₂, …

with

λ₀ < λ₁ < λ₂ < ⋯

is dense in C [0, 1] if and only if λ₀ = 0 and

1/λ₁ + 1/λ₂ + 1/λ₃ + ⋯ = ∞

Prime power example

Euler proved in 1737 that the sum of the reciprocals of the primes diverges, so the sequence

1, x², x³, x⁵, x⁷, x¹¹, …

is dense in C [0, 1]. We can find a polynomial as close as we like to any particular continuous function if we combine enough prime powers.

Let’s see how well we can approximate |x − ½| using prime exponents up to 11.

The polynomial above is

0.4605 − 5.233 x² + 7.211 x³ + 0.9295 x⁵ − 4.4646 x⁷ + 1.614 x¹¹.

This polynomial is not the best possible uniform approximation: it’s the least squares approximation. That is, it minimizes the 2-norm and not the ∞-norm. That’s because it’s convenient to do a least squares fit in Python using scipy.optimize.curve_fit.
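
Here’s roughly how such a fit can be done. This is a sketch, not the exact code behind the plot above; the grid size is arbitrary, so the coefficients you get may differ slightly in the last digits.

import numpy as np
from scipy.optimize import curve_fit

def p(x, c0, c2, c3, c5, c7, c11):
    # linear combination of the constant term and the prime powers of x up to 11
    return c0 + c2*x**2 + c3*x**3 + c5*x**5 + c7*x**7 + c11*x**11

x = np.linspace(0, 1, 1000)
y = np.abs(x - 0.5)
coeffs, _ = curve_fit(p, x, y)   # least squares fit, i.e. it minimizes the 2-norm of the error
print(coeffs)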

Incidentally, the Müntz approximation theorem holds for the 2-norm as well.


Logarithm approximation curiosity

I’ve written before about three simple approximations for logarithms, for base 10

log10(x) ≈ (x – 1)/(x + 1)

base e,

loge(x) ≈ 2(x – 1)/(x + 1)

and base 2

log2(x) ≈ 3(x – 1)/(x + 1).

These can be used to mentally approximate logarithms to moderate accuracy, accurate enough for quick estimates.

Here’s what’s curious about the approximations: the proportionality constants are apparently wrong, and yet the approximations are each fairly accurate.

It is not the case that

loge(x) = 2 log10(x).

In fact,

loge(x) = loge(10) log10(x) = 2.3 log10(x)

and so it seems that the approximation for natural logarithms should be off by 15%. But it’s not. The error is less than 2.5%.

Similarly,

log2(x) = log2(10) log10(x) = 3.32 log10(x)

and so the approximation for logarithms base 2 should be off by about 10%. But it’s not. The error here is also less than 2.5%.

What’s going on?

First of all, the approximation errors are nonlinear functions of x, and the three approximation errors are not proportional to one another. Second, the approximation for logb(x) is only good for 1/√b ≤ x ≤ √b. You can always reduce the problem of calculating logb(x) to the problem of calculating the log in the range 1/√b ≤ x ≤ √b, and so this isn’t a problem.

Here’s a plot of the three error functions.

This plot makes it appear that the approximation error is much worse for natural logs and logs base 2 than for logs base 10. And it would be if we ignored the range of each approximation. Here’s another plot of the approximation errors, plotting each over only its valid range.

When restricted to their valid ranges, the approximations for logarithms base e and base 2 are more accurate than the approximation for logarithms base 10. Both errors are small, but in opposite directions.
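
If you’d like to check this yourself, here’s a minimal sketch. It evaluates each approximation only on its valid range 1/√b ≤ x ≤ √b and reports the largest absolute error; the grid size is an arbitrary choice.

import numpy as np

def max_abs_error(b, k):
    """Largest absolute error of the approximation k(x-1)/(x+1) to the base-b log over its valid range."""
    x = np.linspace(1/np.sqrt(b), np.sqrt(b), 1000)
    exact = np.log(x) / np.log(b)
    approx = k * (x - 1) / (x + 1)
    return np.max(np.abs(approx - exact))

for b, k in [(10, 1), (np.e, 2), (2, 3)]:
    print(b, max_abs_error(b, k))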

Here’s a look at the relative approximation errors.

We can see that the relative errors for the log 2 and log e approximations are less than 2.5%, while the relative error for log 10 can be up to 15%.

 

How big will a carpet be when you roll or unroll it?

If you know the dimensions of a carpet, what will the dimensions be when you roll it up into a cylinder?

If you know the dimensions of a rolled-up carpet, what will the dimensions be when you unroll it?

This post answers both questions.

Flexible carpet: solid cylinder

The edge of a rolled-up carpet can be described as an Archimedean spiral. In polar coordinates, this spiral has the equation

r = hθ / 2π

where h is the thickness of the carpet. The previous post gave an exact formula for the length L of the spiral, given the maximum value of θ which we denoted T. It also gave a simple approximation that is quite accurate when T is large, namely

L = hT² / 4π

If r₁ is the radius of the carpet as a rolled up cylinder, r₁ = hT / 2π and so T = 2πr₁ / h. So when we unroll the carpet

L = hT² / 4π = πr₁² / h.

Now suppose we know the length L and want to find the radius r₁ when we roll up the carpet. Solving the equation above for r₁ gives

r₁ = √(hL/π).

Stiff carpet: hollow cylinder

The discussion so far has assumed that the spiral starts from the origin, i.e. the carpet is rolled up so tightly that there’s no hole in the middle. This may be a reasonable assumption for a very flexible carpet. But if the carpet is stiff, the rolled up carpet will not be a solid cylinder but a cylinder with a cylindrical hole in the middle.

In the case of a hollow cylinder, there is an inner radius r₀ and an outer radius r₁. This means θ runs from T₀ = 2πr₀/h to T₁ = 2πr₁/h.

To find the length of the spiral running from T₀ to T₁ we find the length of a spiral that runs from 0 to T₁ and subtract the length of a spiral from 0 to T₀:

L = πr₁² / h − πr₀² / h = π(r₁² − r₀²)/h.

This approximation is even better because the approximation is least accurate for small T, and we’ve subtracted off that part.

Now let’s go the other way around and find the outer radius r₁ when we know the length L. We also need to know the inner radius r₀. So suppose we are wrapping the carpet around a cardboard tube of radius r₀. Then

r₁ = √(r₀² + hL/π).
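
Putting the two directions together in code, here’s a small sketch based on the formulas above; the function names are mine.

from math import pi, sqrt

def unrolled_length(h, r0, r1):
    """Length of a carpet of thickness h rolled from inner radius r0 to outer radius r1."""
    return pi * (r1**2 - r0**2) / h

def rolled_radius(h, r0, L):
    """Outer radius when a carpet of thickness h and length L is rolled around a core of radius r0."""
    return sqrt(r0**2 + h*L/pi)

# Example: a 10 m carpet, 1 cm thick, rolled around a tube of radius 5 cm
print(rolled_radius(0.01, 0.05, 10.0))   # about 0.185 m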

How much will a cable sag? A simple approximation

Suppose you have a cable of length 2s suspended from two poles of equal height a distance 2x apart. Assuming the cable hangs in the shape of a catenary, how much does it sag in the middle?

If the cable were pulled perfectly taut, we would have s = x and there would be no sag. But in general, s > x and the cable is somewhat lower in the middle than on the two ends. The sag is the difference between the height at the end points and the height in the middle.

The equation of a catenary is

y = a cosh(t/a)

and the length of the catenary between t = −x and t = x is

2s = 2a sinh(x/a).

You could solve this equation for a and the sag will be

g = a cosh(x/a) − a.

Solving for a given s is not an insurmountable problem, but there is a simple approximation for g that has an error of less than 1%. This approximation, given in [1], is

g² = (s − x)(s + x/2).

To visualize the error, I set x = 1 and let a vary to produce the plot below.

The relative error is always less than 1/117. It peaks somewhere around a = 1/3 and decreases from there, the error getting smaller as a increases (i.e. the cable is pulled tighter).
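
Here’s a sketch of the computation behind that plot: fix x = 1, let a vary, compute the exact sag, and compare it with the approximation from [1].

import numpy as np

x = 1.0
a = np.linspace(0.2, 10, 500)        # catenary parameter; larger a means a tighter cable

s = a * np.sinh(x/a)                 # half the cable length
g_exact = a * (np.cosh(x/a) - 1)     # exact sag
g_approx = np.sqrt((s - x) * (s + x/2))

relative_error = (g_exact - g_approx) / g_exact
print(relative_error.max())          # about 0.0085, just under 1/117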


[1] J. S. Frame. An approximation for the dip of a catenary. Pi Mu Epsilon Journal, Vol. 1, No. 2 (April 1950), pp. 44–46

Binary to text to binary

GNU Privacy Guard includes a way to encode binary files as plain ASCII text files, and to turn these text files back into binary. This is intended as a way to transmit encrypted data, but it can be used to convert any kind of file from binary to text and back to binary.

To illustrate this, I’ll use Albrecht Dürer’s Melencolia I as an example. (More on this image and its mathematical significance here.)

Albrecht Dürer’s engraving Melencolia I

This image is saved in the file Melencolia.jpg.

Binary to text

If we run

   gpg --enarmor Melencolia.jpg

at a command line, it produces a file Melencolia.jpg.asc, the asc suffix indicating an ASCII file.

We can look inside this file if we’d like.

-----BEGIN PGP ARMORED FILE-----
Comment: Use "gpg --dearmor" for unpacking

/9j/4AAQSkZJRgABAQEBLAEsAAD/4QBoRXhpZgAATU0AKgAAAAgABAEaAAUAAAAB
AAAAPgEbAAUAAAABAAAARgEoAAMAAAABAAIAAAExAAIAAAARAAAATgAAAAAAAAEs
AAAAAQAAASwAAAABUGFpbnQuTkVUIHYzLjUuOAAA/9sAQwACAQECAQECAgICAgIC
...
wJ/wkzRf8IN4LCCVVwNCtM8sDnPl5zkk565NbcH7Lnw01K0gtrn4feCriB4QfLl0
S2dV+bdgApwMknAwMk+tFFcktjqif//Z
=rpd1
-----END PGP ARMORED FILE-----

The cryptic text /9j/4A…//Z is a base 64 encoded representation of the binary file. Think of the file as one big binary number. Write that number in base 64, i.e. partition the bits into groups of six. Then represent the base 64 “digits” as alphanumeric characters, plus the symbols + and /. More on the details here.
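
Python’s base64 module implements the same encoding, so you can check the mechanics on a small example. The four bytes below are the start of the JPEG file, recovered from the armored text above.

import base64

data = bytes([0xFF, 0xD8, 0xFF, 0xE0])         # first four bytes of the JPEG
text = base64.b64encode(data).decode("ascii")
print(text)   # '/9j/4A==' -- the '/9j/4A' matches the armored text; '==' is padding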

The line =rpd1 is a 24-bit CRC checksum. The equal sign is a separator, and rpd1 is a base 64 encoding of the checksum.

The JPG file is 91,272 bytes and the ASCII file is 123,712 bytes. The ASCII file is about 1/3 larger because every six bits in the binary file corresponds to an eight-bit ASCII character. The ASCII file is a little bit more than 1/3 larger because of the human-friendly text above and below the base 64 encoding, the newline characters, and the checksum.

Text to binary

If we run

    gpg --dearmor Melencolia.jpg.asc

at a command line, it produces a file Melencolia.jpg.asc.gpg. This file is bit-for-bit exactly the same as the original file, which we could confirm by running

    diff Melencolia.jpg Melencolia.jpg.asc.gpg


A curious pattern in January exponential sums

The exponential sum page on this site draws a new image every day based on plugging the month, day, and year into a formula. Some of these images are visually appealing; I’ve had many people ask if they could use the images in publications or on coffee mugs etc.

The images generally look very different from one day to the next. One reason I include the date numbers in the order I do, using the American convention, is that this increases the variety from one day to the next.

If you first became aware of the page on New Year’s Day this year, you might think the page is broken because there was no apparent change between January 1 and January 2. Yesterday’s image was different, but then today, January 4, the image looks just like the images for January 1st and 2nd. They all look like the image below.

The plots produced on each day are distinct, but they are geometrically congruent.

The exponential sum page displays a plot connecting the partial sums of a certain series given here. The axes are turned off and so only the shape of the plot is displayed. If one plot is a translation or dilation of another, the images shown on the page will be the same.

Here’s a plot of the images for January 1, 2, and 4, plotted red, green, and blue respectively.

This shows that the images are not the same, but are apparently translations of each other.

There’s another difference between the images. Connecting consecutive partial sums draws an image clockwise on January 1, but counterclockwise on January 2 and 4. You can see this by clicking on the “animate” link on each page.
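
Here’s a sketch of how you could draw these curves yourself. The series described on the exponential sum page has terms exp(2πi(n/m + n²/d + n³/y)), where m, d, and y are the month, day, and two-digit year; the number of terms below is an arbitrary choice.

import numpy as np
import matplotlib.pyplot as plt

def partial_sums(m, d, y, N=1000):
    """Partial sums of exp(2 pi i (n/m + n^2/d + n^3/y)) for n = 0, 1, ..., N."""
    n = np.arange(N + 1)
    return np.cumsum(np.exp(2j*np.pi*(n/m + n**2/d + n**3/y)))

y = 25   # last two digits of the year (set this to the year in question)
for day, color in [(1, "red"), (2, "green"), (4, "blue")]:
    z = partial_sums(1, day, y)          # January = month 1
    plt.plot(z.real, z.imag, color=color)
plt.axis("equal")
plt.show()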

 

Square root factorial

What factorial is closest to the square root of 2024 factorial?

A good guess would be 1012, based on the idea that √(n!) might be near (n/2)!.

This isn’t correct—the actual answer is 1112—but it’s not wildly off.

Could it be that (2n)! is asymptotically (n!)²?

No, Gauss’ duplication formula

\Gamma(2z) = \frac{1}{\sqrt{2\pi}} 2^{2z - 1/2} \Gamma(z) \Gamma\left(z + \frac{1}{2}\right)

shows that the ratio of (2n)! to (n!)² grows exponentially as a function of n.

However, the ratio only grows exponentially, and factorials grow faster than exponentially. I believe that the value of m minimizing

| √(n!) – m! |

asymptotically approaches n/2. In other words, the inverse factorial of √(n!) approaches n/2. And more generally the inverse factorial of (n!)^(1/k) asymptotically approaches n/k. I haven’t written out a proof, but the plot below shows numerical evidence.

So √(n!) is not asymptotically (n/2)!, but the inverse factorial of √(n!) is asymptotically n/2.

See the next post for a way to compute inverse factorials.
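
In the meantime, here’s one way to produce the numerical evidence mentioned above without overflow: work with log-factorials via scipy.special.gammaln. The function below (the name is mine) finds the integer m minimizing |√(n!) − m!|.

import numpy as np
from scipy.special import gammaln

def inverse_factorial_of_sqrt(n):
    """Integer m minimizing |sqrt(n!) - m!|, computed with log-factorials to avoid overflow."""
    L = 0.5 * gammaln(n + 1)          # log of sqrt(n!)
    m = 1
    while gammaln(m + 2) <= L:        # find the largest m with log(m!) <= L
        m += 1
    # decide between m and m+1 by comparing |m! - sqrt(n!)| and |(m+1)! - sqrt(n!)|,
    # after dividing both by sqrt(n!) so the numbers stay a manageable size
    below = 1 - np.exp(gammaln(m + 1) - L)
    above = np.exp(gammaln(m + 2) - L) - 1
    return m if below <= above else m + 1

print(inverse_factorial_of_sqrt(2024))          # 1112, matching the answer above
print(inverse_factorial_of_sqrt(2024) / 1012)   # about 1.1, slowly approaching 1 as n grows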

222nd Carnival of Mathematics

A blog carnival is a roundup of recent blog posts. The Carnival of Mathematics is a long-running carnival of blog posts on mathematical topics. This is the 222nd edition of the carnival.

Facts about 222

By longstanding tradition, the nth Carnival of Mathematics must begin with trivia about the number n, and so here are five facts about the number 222.

  • There are six groups of order 222, and 222 groups of order 912.
  • 222 = 9 + 87 + 6 × 5 × 4 + 3 + 2 + 1.
  • You can encode the number 222 in the Major mnemonic system as “unknown” or “one-on-one.”

The posts

Gil Kalai looks at what happens when you ask ChatGPT to solve Elchanan Mossel’s dice problem.

Γιώργος Πλούσος (@plousos2505 on the social media platform formerly known as Twitter) posted an image showing how to compute π via a sequence of approximations requiring only basic geometry.

David Eppstein’s blog 11011110 has a post on Pyramidology. It’s particularly fitting to have a post from 11011110 in this carnival. As the author explains on his About page,

This journal’s name comes from interpreting my initials as the hexadecimal number 0xDE (decimal 222) and then converting the result to binary.

The blog ThatsMaths has a post on Sharkovsky numbering, Oleksandr Sharkovsky (1936–2022), and Sharkovsky’s Theorem. This post includes the beautiful image from Wikimedia.

The post Patterns of reality by Timothy Williamson gives an overview of logic from basics to the work of Gödel and Turing.

Larissa Fedunik-Hofman posts an interview with Andrew Krause discussing dynamical systems.

Finally, we have Matthew Scroggs’ Advent calendar of math puzzles.

To see previous editions of the carnival, or to submit a post to the next edition, see the Carnival of Mathematics page on Aperiodical.