Infinite periodic table

All the chemical elements discovered or created so far follow a regular pattern in how their electrons are arranged: the nth subshell (s, p, d, f, …) contains 2n – 1 orbitals that each hold up to two electrons. For a given atomic number, you can determine how its electrons are distributed into shells and subshells using the Aufbau principle.

The Aufbau principle is a good approximation, but not exact. For this post we’ll assume it is exact, and that everything in the preceding paragraph generalizes to an arbitrary number of shells and electrons.

Under those assumptions, what would the periodic table look like if more elements are discovered or created?

D. Weiss worked out the recurrence relations that the periods of the table satisfy and found their solutions.

The number of elements in the nth period works out to

   P_n = \frac{(-1)^n(2n+3) + 2n^2 + 6n + 5}{4}

and the atomic numbers of the elements at the end of the nth period (the noble gases) are

Z_n = \frac{(-1)^n(3n+6) + 2n^3 + 12n^2 + 25n - 6}{12}

We can verify that these formulas give the right values for the actual periodic table as follows.

    >>> def p(n): return ((-1)**n*(2*n+3) + 2*n*n + 6*n + 5)/4
    >>> def z(n): return ((-1)**n*(3*n+6) + 2*n**3 + 12*n**2 + 25*n - 6)/12
    >>> [p(n) for n in range(1, 8)]
    [2.0, 8.0, 8.0, 18.0, 18.0, 32.0, 32.0]
    >>> [z(n) for n in range(1, 8)]
    [2.0, 10.0, 18.0, 36.0, 54.0, 86.0, 118.0]

So, hypothetically, if there were an 8th row to the periodic table, it would contain 50 elements, and the last element of this row would have atomic number 168.
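As a check on that claim, the same formulas, with integer division this time since the values are whole numbers, extend to an eighth period:

```python
def p(n):
    # number of elements in the nth period
    return ((-1)**n * (2*n + 3) + 2*n*n + 6*n + 5) // 4

def z(n):
    # atomic number of the noble gas that ends the nth period
    return ((-1)**n * (3*n + 6) + 2*n**3 + 12*n**2 + 25*n - 6) // 12

print(p(8), z(8))   # 50 168
```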


Chemical element abbreviation patterns

I’ve wondered occasionally about the patterns in how chemical elements are abbreviated. If you don’t know the abbreviation for an element, is there a simple algorithm that would let you narrow the range of possibilities or improve your odds at guessing?

Here’s a survey of how the elements are abbreviated.

Latin and German

The elements that have been known the longest often have abbreviations that are mnemonic in Latin.

  • Iron (Fe)
  • Sodium (Na)
  • Silver (Ag)
  • Tin (Sn)
  • Antimony (Sb)
  • Tungsten (W)
  • Gold (Au)
  • Mercury (Hg)
  • Lead (Pb)
  • Potassium (K)
  • Copper (Cu)

I included tungsten in this section because its abbreviation W is also mnemonic in another language, in this case German (Wolfram).

Initial letter

The easiest abbreviations to remember are simply the first letters of the element names (in English).

  • Boron (B)
  • Carbon (C)
  • Fluorine (F)
  • Hydrogen (H)
  • Iodine (I)
  • Nitrogen (N)
  • Oxygen (O)
  • Phosphorus (P)
  • Sulfur (S)
  • Uranium (U)
  • Vanadium (V)
  • Yttrium (Y)

First two letters

The largest group of elements consists of those abbreviated by the first two letters of their name. When in doubt, guess the first two letters.

  • Actinium (Ac)
  • Aluminum (Al)
  • Americium (Am)
  • Argon (Ar)
  • Barium (Ba)
  • Beryllium (Be)
  • Bismuth (Bi)
  • Bromine (Br)
  • Calcium (Ca)
  • Cerium (Ce)
  • Chlorine (Cl)
  • Cobalt (Co)
  • Dysprosium (Dy)
  • Erbium (Er)
  • Europium (Eu)
  • Flerovium (Fl)
  • Francium (Fr)
  • Gallium (Ga)
  • Germanium (Ge)
  • Helium (He)
  • Holmium (Ho)
  • Indium (In)
  • Iridium (Ir)
  • Krypton (Kr)
  • Lanthanum (La)
  • Lithium (Li)
  • Lutetium (Lu)
  • Molybdenum (Mo)
  • Neon (Ne)
  • Nickel (Ni)
  • Nobelium (No)
  • Oganesson (Og)
  • Osmium (Os)
  • Polonium (Po)
  • Praseodymium (Pr)
  • Radium (Ra)
  • Rhodium (Rh)
  • Ruthenium (Ru)
  • Scandium (Sc)
  • Selenium (Se)
  • Silicon (Si)
  • Tantalum (Ta)
  • Tellurium (Te)
  • Thorium (Th)
  • Titanium (Ti)
  • Xenon (Xe)

Many of these elements use the first two letters to avoid a conflict with the first letter. For example, helium uses He because hydrogen already took H.

Several elements start with the same letter, and in some cases none of them uses just that first letter. For example, actinium, aluminum, americium, and argon all begin with A, yet no element is abbreviated simply A.

Xenon could have been X, or dysprosium could have been just D, but that’s not how it was done.

First letter and next consonant

The next largest group of elements are abbreviated by their first letter and the next consonant, skipping over a vowel.

  • Bohrium (Bh)
  • Cadmium (Cd)
  • Cesium (Cs)
  • Dubnium (Db)
  • Gadolinium (Gd)
  • Hafnium (Hf)
  • Hassium (Hs)
  • Livermorium (Lv)
  • Magnesium (Mg)
  • Manganese (Mn)
  • Meitnerium (Mt)
  • Neodymium (Nd)
  • Neptunium (Np)
  • Nihonium (Nh)
  • Niobium (Nb)
  • Rubidium (Rb)
  • Samarium (Sm)
  • Technetium (Tc)
  • Zinc (Zn)
  • Zirconium (Zr)

Many of these elements would cause a conflict if they had been abbreviated using one of the above rules. For example, cadmium could not be C because that’s carbon, and it could not be Ca because that’s calcium.

Initials of first two syllables

  • Astatine (At)
  • Berkelium (Bk)
  • Darmstadtium (Ds)
  • Einsteinium (Es)
  • Fermium (Fm)
  • Lawrencium (Lr)
  • Mendelevium (Md)
  • Moscovium (Mc)
  • Platinum (Pt)
  • Promethium (Pm)
  • Roentgenium (Rg)
  • Terbium (Tb)
  • Thallium (Tl)

Initials of first and third syllable

  • Californium (Cf)
  • Copernicium (Cn)
  • Palladium (Pd)
  • Rutherfordium (Rf)
  • Seaborgium (Sg)
  • Tennessine (Ts)
  • Ytterbium (Yb)

First and last letter

  • Curium (Cm)
  • Radon (Rn)
  • Thulium (Tm)

Miscellaneous

  • Arsenic (As)
  • Chromium (Cr)
  • Plutonium (Pu)
  • Protactinium (Pa)
  • Rhenium (Re)
  • Strontium (Sr)

Table

Update: Here’s a visualization of the categories above.

Periodic table of element abbreviations

Key to the groups above:

  1. First letter
  2. First two letters
  3. First letter and next consonant
  4. Initials of first and second syllables
  5. Initials of first and third syllables
  6. First and last letter
  7. First letter and something else
  8. Historical
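Returning to the opening question, the groups above suggest a simple guessing strategy: try the first letter, then the first two letters, then the first letter plus the next consonant. Here's a minimal sketch of that heuristic; the function and its name are my own illustration, not an official rule:

```python
def guesses(name):
    """Yield abbreviation guesses for an element name, most likely first.
    A rough heuristic based on the groups above."""
    name = name.capitalize()
    yield name[0]            # just the initial letter, e.g. O for oxygen
    yield name[:2]           # first two letters, e.g. He for helium
    for ch in name[1:]:      # first letter plus next consonant, e.g. Cd
        if ch.lower() not in "aeiou":
            yield name[0] + ch.lower()
            break

print(list(guesses("cadmium")))   # ['C', 'Ca', 'Cd']
```

The three guesses cover the three largest groups in the survey, so trying them in order gives decent odds even without memorizing the table.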


Oscillations in RLC circuits

Electrical and mechanical oscillations satisfy analogous equations. This is the basis of using the word “analog” in electronics. You could study a mechanical system by building an analogous circuit and measuring that circuit in a lab.

Mass, dashpot, spring

Years ago I wrote a series of four posts about mechanical vibrations.

Everything in these posts maps over to electrical vibrations with a change of notation.

That series looked at the differential equation

m u'' + \gamma u' + k u = F \cos\omega t

where m is mass, γ is damping from a dashpot, and k is the stiffness of a spring.

Inductor, resistor, capacitor

Now we replace our mass, dashpot, and spring with an inductor, resistor, and capacitor.

Imagine a circuit with an L henry inductor, an R ohm resistor, and a C farad capacitor in series. Let Q(t) be the charge in coulombs over time and let E(t) be an applied voltage, i.e. an AC power source.

Charge formulation

One can use Kirchhoff’s law to derive

LQ'' + RQ' + \frac{1}{C} Q = E

Here we have the correspondences

\begin{align*} u &\leftrightarrow Q \\ m &\leftrightarrow L \\ \gamma &\leftrightarrow R \\ k &\leftrightarrow 1/C \end{align*}

So charge is analogous to position, inductance is analogous to mass, resistance is analogous to damping, and capacitance is analogous to the reciprocal of stiffness.

The reciprocal of capacitance is called elastance, so we can say elastance is analogous to stiffness.

Current formulation

It’s more common to see the differential equation above written in terms of current I.

I = \frac{dQ}{dt}

If we take the derivative of both sides of

LQ'' + RQ' + \frac{1}{C} Q = E

we get

LI'' + RI' + \frac{1}{C} I = E'

Natural frequency

With mechanical vibrations, as shown here, the natural frequency is

\omega_0 = \sqrt{\frac{k}{m}}

and with electrical oscillations this becomes

\omega_0 = \frac{1}{\sqrt{LC}}

Steady state

When a mechanical or electrical system is driven by a sinusoidal forcing function, the system eventually settles down to a solution proportional to a phase-shifted copy of the driving function.

To be more explicit, the solution to the differential equation

m u'' + \gamma u' + k u = F \cos\omega t

has a transient component that decays exponentially and a steady state component proportional to cos(ωt-φ). The same is true of the equation

LI'' + RI' + \frac{1}{C} I = E'

The proportionality constant is conventionally denoted 1/Δ and so the steady state solution is

\frac{F}{\Delta} \cos(\omega t - \phi)

for the mechanical case and

\frac{F}{\Delta} \cos(\omega t - \phi)

for the electrical case, where F now denotes the amplitude of the forcing term E′.

The constant Δ satisfies

\Delta^2 = m^2(\omega_0^2 -\omega^2)^2 + \gamma^2 \omega^2

for the mechanical system and

\Delta^2 = L^2(\omega_0^2 -\omega^2)^2 + R^2 \omega^2

for the electrical system.

When the damping coefficient γ or the resistance R is small, the maximum amplitude occurs when the driving frequency ω is near the natural frequency ω0.
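To make the resonance concrete, here is a small sketch that evaluates the steady-state amplitude F/Δ and confirms it peaks near the natural frequency. The component values are arbitrary choices for illustration, not taken from any particular circuit.

```python
import math

# illustrative component values only
L, R, C = 0.5, 2.0, 1e-4   # henries, ohms, farads
F = 1.0                     # amplitude of the forcing term

w0 = 1 / math.sqrt(L * C)   # natural frequency, ~141.4 rad/s

def amplitude(w):
    """Steady-state amplitude F/Delta at driving frequency w."""
    delta = math.sqrt(L**2 * (w0**2 - w**2)**2 + R**2 * w**2)
    return F / delta

# with small R, the amplitude is largest near w0
print(amplitude(w0) > amplitude(0.5 * w0))   # True
print(amplitude(w0) > amplitude(2.0 * w0))   # True
```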

More on damped, driven oscillations here.

How is portable AM radio possible?

The length of antenna you need to receive a radio signal is proportional to the signal’s wavelength, typically 1/2 or 1/4 of the wavelength. Cell phones operate at gigahertz frequencies, and so the antennas are small enough to hide inside the phone.

But AM radio stations operate at much lower frequencies. For example, there’s a local station, KPRC, that broadcasts at 950 kHz, roughly one megahertz. That means the wavelength of their carrier is around 300 meters. An antenna as long as a quarter of a wavelength would be roughly as long as a football field, and yet people listen to AM on portable radios. How is that possible?
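The wavelength arithmetic is just λ = c/f:

```python
c = 299_792_458        # speed of light in m/s
f = 950_000            # KPRC's carrier frequency in Hz

wavelength = c / f     # about 316 m, i.e. "around 300 meters"
quarter_wave = wavelength / 4
print(round(wavelength), round(quarter_wave))   # 316 79
```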

looking inside a portable radio

There are two things going on. First, transmitting is very different from receiving in terms of power, and hence in terms of the need for efficiency. People are not transmitting AM signals from portable radios.

Second, the electrical length of an antenna can be longer than its physical length, i.e. an antenna can function as if it were longer than it actually is. When you tune into a radio station, you’re not physically making your antenna longer or shorter, but you’re adjusting electronic components that make it behave as if you were making it longer or shorter. In the case of an AM radio, the electrical length is orders of magnitude more than the physical length. Electrical length and physical length are closer together for transmitting antennas.

Here’s what a friend of mine, Rick Troth, said when I asked him about AM antennas.

If you pop open the case of a portable AM radio, you’ll see a “loop stick”. That’s the AM antenna. (FM broadcast on most portables uses a telescoping antenna.) The loop is tuned by two things: a ferrite core and the tuning capacitor. The core makes the coiled wiring of the antenna resonate close to AM broadcast frequencies. The “multi gang” variable capacitor coupled with the coil forms an LC circuit, for finer tuning. (Other capacitors in the “gang” tune other parts of the radio.) The loop is small, but is tuned for frequencies from 530 kHz to 1.7 MHz.
Loops are not new. When I was a kid, I took apart so many radios. Most of the older (tube type, and AM only) radios had a loop inside the back panel. Quite different from the loop stick, but similar electrical properties.
Car antennas don’t match the wavelengths for AM broadcast. Never have. That’s a case where matching matters less for receivers. (Probably matters more for satellite frequencies because they’re so weak.) Car antennas, whether whip from decades ago or embedded in the glass, probably match FM broadcast. (About 28 inches per side of a dipole, or a 28 inch quarter wave vertical.) But again, it does matter a little less for receive than for transmit.

In the photo above, courtesy Rick, the AM antenna is the copper coil on the far right. The telescoping antenna outside the case extends to be much longer physically than the AM antenna, even though AM radio waves are two orders of magnitude longer than FM radio waves.

 

Solar declination

This post expands on a small part of the post Demystifying the Analemma by M. Tirado.

Apparent solar declination δ is given by

δ = sin⁻¹( sin(ε) sin(θ) )

where ε is axial tilt and θ is the angular position of a planet. See Tirado’s post for details. Here I want to unpack a couple of things from the post. One is that the declination is approximately

δ = ε sin(θ),

the approximation being particularly good for small ε. The other is that the more precise equation approaches a triangular wave as ε approaches a right angle.

Let’s start out with ε = 23.4° because that is the axial tilt of the Earth. The approximation above is a variation on the approximation

sin φ ≈ φ

for small φ when φ is measured in radians. More on that here.

An angle of 23.4° is 0.4084 radians. This is not particularly small, and yet the approximation above works well. The approximation above amounts to approximating sin⁻¹(x) with x, and Taylor’s theorem tells us the error is about x³/6, which for x = sin(ε) is about 0.01. You can’t see the difference between the exact and approximate equations from looking at their graphs; the plot lines lie on top of each other.
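Here’s a quick numerical check, in Python this time, that the exact and approximate declinations stay within that error estimate for ε = 23.4°:

```python
import math

eps = math.radians(23.4)   # Earth's axial tilt
s = math.sin(eps)

# largest gap between exact and approximate declination over a full orbit
err = max(abs(math.asin(s * math.sin(th)) - eps * math.sin(th))
          for th in (i * 2 * math.pi / 10000 for i in range(10001)))
print(err < 0.01)   # True: within the x^3/6 estimate
```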

Even for a much larger axial tilt of 60° = 1.047 radians, the two curves are fairly close together. The approximation, in blue, slightly overestimates the exact value, in gold.

This plot was produced in Mathematica with

    ε = 60 Degree
    Plot[{ε Sin[θ], ArcSin[Sin[ε] Sin[θ]]}, {θ, 0, 2 π}]

As ε gets larger, the curves start to separate. When ε = 90° the gold curve becomes exactly a triangular wave.

Update: Here’s a plot of the maximum approximation error as a function of ε.


When do two-body systems have stable Lagrange points?

The previous post looked at two of the five Lagrange points of the Sun-Earth system. These points, L1 and L2, are located on either side of Earth along a line between the Earth and the Sun. The third Lagrange point, L3, is located along that same line, but on the opposite side of the Sun.

L1, L2, and L3 are unstable, but stable enough on a short time scale to be useful places to position probes. Lagrange points are in the news this week because the James Webb Space Telescope (JWST), launched on Christmas Day, is headed toward L2 at the moment.

The remaining Lagrange points, L4 and L5, are stable. These points are essentially in Earth’s orbit around the Sun, 60° ahead and 60° behind Earth. To put it another way, they’re located where Earth will be in two months and where Earth was two months ago. The points L3, L4, and L5 form an equilateral triangle centered at the Sun.

Lagrange points more generally

Lagrange points are not unique to the Sun and Earth; other two-body systems have them as well. You have two bodies m1 and m2, such as a star and a planet or a planet and a moon, and a third body, such as the JWST, whose mass is negligible compared to the other two.

The L1, L2, and L3 points are always unstable, meaning that an object placed there will eventually leave, but the L4 and L5 points are stable, provided one of the bodies is sufficiently less massive than the other. This post will explore just how much less massive.

Mass ratio requirement

Michael Spivak [1] devotes a section of his physics book to the Trojan asteroids, asteroids that orbit the Sun at the L4 and L5 Lagrange points of a Sun-planet system. Most Trojan asteroids are part of the Sun-Jupiter system, but other planets have Trojan asteroids as well. The Earth has a couple Trojan asteroids of its own.

Spivak shows that in order for L4 and L5 to be stable, the masses of the two objects must satisfy

(m1 – m2) / (m1 + m2) > k

where m1 is the mass of the more massive body, m2 is the mass of the less massive body, and

k = √(23/27).

If we define r to be the ratio of the smaller mass to the larger mass,

r = m2 / m1,

then by dividing by m1 we see that equivalently we must have

(1 – r) / (1 + r) > k.

We run into the function (1 – z)/(1 + z) yet again. As we’ve pointed out before, this function is its own inverse, and so the solution for r is that

r < (1 – k) / (1 + k) = 0.04006…

In other words, the more massive body must be at least 25 times more massive than the smaller body.

The Sun is over 1000 times more massive than Jupiter, so Jupiter’s L4 and L5 Lagrange points with respect to the Sun are stable. The Earth is over 80 times more massive than the Moon, so the L4 and L5 points of the Earth-Moon system are stable as well.

Pluto has only 8 times the mass of its moon Charon, so the L4 and L5 points of the Pluto-Charon system would not be stable.
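The threshold and the three systems are easy to check numerically; the mass ratios below are rounded.

```python
import math

k = math.sqrt(23 / 27)
threshold = (1 - k) / (1 + k)   # ~0.04006

# rounded mass ratios m2/m1 for three systems
systems = {"Sun-Jupiter": 1 / 1047, "Earth-Moon": 1 / 81.3, "Pluto-Charon": 1 / 8.1}
for name, r in systems.items():
    # L4 and L5 are stable only when the ratio is below the threshold
    print(name, "stable" if r < threshold else "unstable")
```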


[1] Michael Spivak: Physics for Mathematicians: Mechanics I. Addendum 10A.

Fraud, Sloppiness, and Statistics

A few years ago the scientific community suddenly realized that a lot of scientific papers were wrong. I imagine a lot of people knew this all along, but suddenly it became a topic of discussion and people realized the problem was bigger than imagined.

The layman’s first response was “Are you saying scientists are making stuff up?” and the response from the scientific community was “No, that’s not what we’re saying. There are subtle reasons why an honest scientist can come to the wrong conclusion.” In other words, don’t worry about fraud. It’s something else.

Well, if it’s not fraud, what is it? The most common explanations are sloppiness and poor statistical practice.

Sloppiness

The sloppiness hypothesis says that irreproducible results may be the result of errors. Or maybe the results are essentially correct, but the analysis is not reported in sufficient detail for someone to verify it. I first wrote about this in 2008.

While I was working for MD Anderson Cancer Center, a couple of my colleagues dug into irreproducible papers and tried to reverse engineer the mistakes and omissions. For example, this post mentioned some of the erroneous probability formulas that were implicitly used in journal articles.

Bad statistics

The bad statistics hypothesis was championed by John Ioannidis in his now-famous paper Why Most Published Research Findings Are False. The article could have been titled “Why most research findings will be false, even if everyone is honest and careful.” For a cartoon version of Ioannidis’s argument, see xkcd’s explanation of why jelly beans cause acne. In a nutshell, the widespread use of p-values makes it too easy to find spurious but publishable results.

Ioannidis explained that in theory most results could be false, based on statistical theory, but potentially things could be better in practice than in theory. Unfortunately they are not. Numerous studies have tried to empirically estimate [1] what proportion of papers cannot be reproduced. The estimate depends on context, but it’s high.

For example, ScienceNews reported this week on an attempt to reproduce 193 experiments in cancer biology. Only 50 of the experiments could be reproduced, and of those, the reported effects were found to be 85% smaller than initially reported. Here’s the full report.

Fraud

This post started out by putting fraud aside. In a sort of scientific version of Hanlon’s razor, we agreed not to attribute to fraud what could be adequately explained by sloppiness and bad statistics. But what about fraud?

There was a spectacular case of fraud in The Lancet last year.

article summary with RETRACTED stamped on top in red

The article was published May 22, 2020 and retracted on June 4, 2020. I forget the details, but the fraud was egregious. For example, if I remember correctly, the study claimed to have data on more than 100% of the population in some regions. Peer review didn’t catch the fraud but journalists did.

Who knows how common fraud is? I see articles occasionally that try to estimate it. But exposing fraud takes a lot of work, and it does not advance your career.

I said above that my former colleagues were good at reverse engineering errors. They also ended up exposing fraud. They started out trying to figure out how Anil Potti could have come to the results he did, and finally determined that he could not have. This ended up being reported in The Economist and on 60 Minutes.

As Nick Brown recently said on Twitter,

At some point I think we’re going to have to accept that “widespread fraud” is both a plausible and parsimonious explanation for the huge number of failed replications we see across multiple scientific disciplines.

That’s a hard statement to accept, but that doesn’t mean it’s wrong.

[1] If an attempt to reproduce a study fails, how do we know which one was right? The second study could be wrong, but it’s probably not. Verification is generally easier than discovery. The original authors probably explored multiple hypotheses looking for a publishable result, while the replicators tested precisely the published hypothesis.

Andrew Gelman suggested a thought experiment. When a large follow-up study fails to replicate a smaller initial study, imagine if the timeline were reversed. If someone ran a small study and came up with a different result than a previous large study, which study would have more credibility?

Aquinas on epicycles

C. S. Lewis quotes Thomas Aquinas in The Discarded Image:

In astronomy an account is given of eccentricities and epicycles on the ground that if their assumption is made the sensible appearances as regards celestial motion can be saved. But this is not a strict proof since for all we know they could also be saved by some different assumption.

Time dilation in SF and GPS

I’m reading Voyage to Alpha Centauri and ran into a question about relativity. The book says in one place that their ship is moving at 56.7% of the speed of light, and in another place it says that time moves about 20% slower for them relative to folks on Earth. Are those two statements consistent?

It wouldn’t bother me if they weren’t consistent. I ordinarily wouldn’t bother to check such things. But I remember looking into time dilation before and being surprised how little effect velocity has until you get very close to the speed of light. I couldn’t decide whether the relativistic effect in the novel sounded too large or too small.

If a stationary observer is watching a clock moving at velocity v, during one second of the observer’s time,

\sqrt{1 - \frac{v^2}{c^2}}

seconds will have elapsed on the moving clock.

Even at 20% of the speed of light, the moving clock only appears to slow down by about 2%.

If, as in the novel, a spaceship is moving at 56.7% of the speed of light, then for every second an Earth-bound observer experiences, someone on the ship will experience √(1 – 0.567²) = 0.82 seconds. So time would run about 20% slower on the ship, as the novel says.
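The calculation in this post is a one-liner:

```python
import math

def dilation(beta):
    """Seconds that elapse on a clock moving at beta = v/c,
    per second of a stationary observer's time."""
    return math.sqrt(1 - beta**2)

print(round(dilation(0.567), 2))   # 0.82, about 20% slower
print(round(dilation(0.2), 2))     # 0.98, only about 2% slower
```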

The author must have either done this calculation or asked someone to do it for him. I had a science fiction author ask me for something a while back, though I can’t remember right now what it was.

Small velocities

You can expand the expression above in a Taylor series to get

\sqrt{1 - \frac{v^2}{c^2}} = 1 -\frac{v^2}{2c^2} -\frac{v^4}{8c^4} + \cdots

and so for velocities much smaller than the speed of light, the effect of time dilation is about v²/2c², a quadratic function of velocity. You can use this to confirm the comment above that when v/c = 0.2, the effect of time dilation is about 2%.

GPS satellites travel at about 14,000 km/hour, and so the effect of time dilation is on the order of 1 part in 10¹⁰. This would seem insignificant, except it amounts to milliseconds per year, and so it does make a practical difference.

For something moving 100 times slower, like a car, time dilation would be 10,000 times smaller. So time in a car driving at 90 miles per hour slows down by one part in 10¹⁴ relative to a stationary observer.
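The quadratic approximation makes both estimates easy to reproduce; the speeds below are rounded.

```python
c = 299_792_458                  # speed of light, m/s

def slowdown(v):
    """Leading-order time dilation v^2 / (2 c^2) for speed v in m/s."""
    return v**2 / (2 * c**2)

gps = 14_000 * 1000 / 3600       # ~3,900 m/s
car = 90 * 1609.344 / 3600       # 90 mph, ~40 m/s

print(slowdown(gps))             # ~8e-11, about 1 part in 10^10
print(slowdown(car))             # ~9e-15, about 1 part in 10^14
```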

Tape measures

The math in the section above is essentially the same as the math in the post explaining why it doesn’t matter much if a tape measure does run exactly straight when measuring a large distance. They both expand an expression derived from the Pythagorean theorem in a Taylor series.

Martian gravity

There is a lot of talk about Mars right now, and understandably so. The flight of Ingenuity today was awesome. As Daniel Oberhaus pointed out on Twitter,

… the atmosphere on the surface of Mars is so thin that it’s the equivalent of flying at ~100k feet on Earth.

No rotorcraft, piloted or uncrewed, has ever broken 50k on Earth.

When I heard that gravity on Mars is about 1/3 of that of Earth, that sounded too small to me. My thinking was that gravity on the moon is about 1/6 of Earth, and Mars is much bigger than the moon, so gravity on Mars ought to be closer to gravity on Earth.

Where I went wrong was my assessment that Mars is “much” bigger than the moon. The radius of Mars is only about twice that of our moon; I would have guessed higher.

Surface gravity is proportional to mass over radius squared. If the density of two balls is the same, then mass goes up like radius cubed, and so gravity would increase in proportion to radius. The density of Mars and the moon are about the same, and so the object with twice the radius has about twice the surface gravity.

Let’s put some numbers to things. We’ll let m and r stand for mass and mean radius. And we’ll let subscripts E, M, and L stand for Earth, Mars, and Luna (our moon).

rE = 6371 km
rM = 3390 km
rL = 1738 km

The radius of Mars is approximately the geometric mean of the radii of the Earth and the moon.

(rE rL)½ = 3327 ≈ 3390 = rM

To calculate surface gravity we’ll need masses [1].

mE = 5.972 × 10²⁴ kg
mM = 6.417 × 10²³ kg
mL = 7.342 × 10²² kg

The mass of Mars is also approximately the geometric mean of the masses of the Earth and the moon [2].

(mE mL)½ = 6.6 × 10²³ ≈ 6.4 × 10²³ = mM

The ratio of Martian gravity to lunar gravity is

(mM / rM²) / (mL / rL²) = 2.2968

The ratio of Earth gravity to Martian gravity is

(mE / rE²) / (mM / rM²) = 2.6349

so saying surface gravity on Mars is a third of that on Earth underestimates gravity on Mars a little but not too much.
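Putting the masses and radii above into code:

```python
# masses in kg and mean radii in km, from the figures above
masses = {"earth": 5.972e24, "mars": 6.417e23, "moon": 7.342e22}
radii  = {"earth": 6371, "mars": 3390, "moon": 1738}

def g(body):
    # surface gravity is proportional to m / r^2; constants cancel in ratios
    return masses[body] / radii[body]**2

print(round(g("mars") / g("moon"), 2))    # 2.3
print(round(g("earth") / g("mars"), 2))   # 2.63
```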


[1] I’m assuming mass is uniformly distributed for each body. It’s not exactly, and this makes a difference if you’re planning satellite trajectories, but it doesn’t make much of a difference here.

[2] This is not a consequence of the relationship between the radii because the bodies have different densities. The moon and Mars have similar densities, but both are significantly less dense than Earth.