Abelian groups of order 2025

Every finite Abelian group can be written as the direct sum of cyclic groups of prime power order.

To find the number of Abelian groups of order 2025 we need to count the ways to write 2025 as a product of prime powers.

Now 2025 = 3⁴ × 5².

We can partition 3⁴ into prime powers 5 ways:

  • 3⁴
  • 3³ × 3
  • 3² × 3²
  • 3² × 3 × 3
  • 3 × 3 × 3 × 3

And we can partition 5² two ways:

  • 5²
  • 5 × 5

A couple of notes here. First, we only consider positive powers. Second, two partitions are considered the same if they consist of the same factors in a different order. For example, 3 × 3 × 3² and 3² × 3 × 3 are considered to be the same partition.

It follows that we can partition 2025 into prime powers 10 ways: we choose one of the five ways to partition 3⁴ and one of the two ways to partition 5². Here are all the Abelian groups of order 2025:

  • ℤ81 ⊕ ℤ25
  • ℤ81 ⊕ ℤ5 ⊕ ℤ5
  • ℤ27 ⊕ ℤ3 ⊕ ℤ25
  • ℤ27 ⊕ ℤ3 ⊕ ℤ5 ⊕ ℤ5
  • ℤ9 ⊕ ℤ9 ⊕ ℤ25
  • ℤ9 ⊕ ℤ9 ⊕ ℤ5 ⊕ ℤ5
  • ℤ9 ⊕ ℤ3 ⊕ ℤ3 ⊕ ℤ25
  • ℤ9 ⊕ ℤ3 ⊕ ℤ3 ⊕ ℤ5 ⊕ ℤ5
  • ℤ3 ⊕ ℤ3 ⊕ ℤ3 ⊕ ℤ3 ⊕ ℤ25
  • ℤ3 ⊕ ℤ3 ⊕ ℤ3 ⊕ ℤ3 ⊕ ℤ5 ⊕ ℤ5

Given a prime number q, there are as many ways to write qᵏ as a product of prime powers as there are ways to partition k into a sum of positive integers, a count denoted p(k). What we have shown above is that the number of Abelian groups of order 3⁴ × 5² equals p(4) p(2).

In general, to find the number of Abelian groups of order n, factor n into prime powers, then multiply the partition numbers of the exponents in the factorization.
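Here is a short Python sketch of this recipe (my addition, assuming SymPy is available; factorint does the factoring and npartitions computes p(k)):

    from math import prod
    from sympy import factorint, npartitions

    def num_abelian_groups(n):
        # Multiply the partition numbers p(e) of the exponents e
        # in the prime factorization of n.
        return prod(npartitions(e) for e in factorint(n).values())

    print(num_abelian_groups(2025))   # p(4) * p(2) = 5 * 2 = 10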

Related posts

Cycle of New Year’s Days

Here’s a visualization of how the day of the week for New Year’s Day changes.

The green diamonds represent leap years and the blue squares represent ordinary years.

The day of the week for New Year’s Day advances one day after each ordinary year and two days after each leap year, hence the diagonal stripes in the graph above.

The whole cycle repeats every 28 years. During that 28-year cycle, New Year’s Day falls on each day of the week four times: three times in ordinary years and once in a leap year. Or to put it another way, each horizontal row of the graph above contains three blue squares and one green diamond.

The comments above are true under the Julian calendar, without exception. And they’re true for long stretches of time under the Gregorian calendar. For example, the pattern above repeats from 1901 to 2099.
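As a quick check (a sketch I’ve added, not part of the original discussion), Python’s datetime module confirms that the weekday of New Year’s Day repeats with period 28 throughout 1901 to 2099:

    from datetime import date

    # Weekday of January 1 for every year from 1901 through 2099
    weekdays = [date(year, 1, 1).weekday() for year in range(1901, 2100)]

    # The pattern repeats every 28 years within this range
    assert all(weekdays[i] == weekdays[i + 28] for i in range(len(weekdays) - 28))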

The Julian calendar had a leap day every four years, period. This made the calendar year longer than the solar year by about 3 days every 400 years, so the Gregorian calendar removed 3 leap days. A year divisible by 100 is not a leap year unless it is also divisible by 400. So the Gregorian calendar disrupts the pattern above every 100 years.

Related posts

Details of generating primes for cryptography

RSA public key cryptography begins by finding a couple large primes. You essentially do this by testing random numbers until you find primes, but not quite.

Filippo Valsorda just posted a good article on this.

Suppose you’re looking for a 1024-bit prime number. You generate random 1024-bit numbers and then test until you find one that’s prime. You can immediately make this process twice as fast by setting the last bit to 1: it would be wasteful to generate a new number every time you happened to draw an even number.

A little less obvious is that it’s common to set the top bit as well. When you’re generating a number between 0 and 2¹⁰²⁴ − 1, you could generate a small number. It’s possible, though extremely unlikely, that you’d generate a very small number, like 59. But it’s not so unlikely that you’d generate a number on the order of 2¹⁰²⁰, for example. By setting the top bit, you know you’re generating a number between 2¹⁰²³ and 2¹⁰²⁴.

Most composite numbers have small factors, so you check for divisibility by 3, 5, 7 etc. before running more time-consuming tests. Probabilistic tests are far more efficient than deterministic tests, so in practice everyone uses probable primes in RSA. For details of how you apply these tests, and how many tests to run, see Filippo Valsorda’s article.
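Here is a toy sketch of the candidate-generation loop described above, for illustration only and not for production cryptography. SymPy’s isprime stands in for the probable-prime tests discussed in the article, and the trial-division list is just an example:

    import secrets
    from sympy import isprime

    def random_prime(bits=1024):
        while True:
            n = secrets.randbits(bits)
            n |= 1                      # set the last bit: skip even candidates
            n |= 1 << (bits - 1)        # set the top bit: guarantee a full-size candidate
            if any(n % p == 0 for p in (3, 5, 7, 11, 13)):
                continue                # cheap trial division before the expensive test
            if isprime(n):
                return n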

Should you be concerned about setting the top bit of prime candidates? There are naive and sophisticated reasons not to worry, and an intermediate reason to at least think about it.

The naive response is that you’re just losing one bit of randomness. How much could that hurt? But in other contexts, such as losing one bit of randomness in an AES key, people do worry about such losses.

The bits in the prime factors of an RSA modulus do not correspond directly to bits of security. A 2048-bit modulus, the product of two 1024-bit primes, has about 112 bits of security. (See NIST SP 800-57.) You could set several bits in your prime factors before losing a bit of security. If this bothers you, move up to a 3072-bit modulus rather than worrying that your 2048-bit modulus is in a sense a 2046-bit modulus.

More cryptography posts

What exactly is a second?

The previous post looked into the common definition of Unix time as “the number of seconds since January 1, 1970 GMT” and why it’s not exactly true. It was true for a couple years before we started inserting leap seconds. Strictly speaking, Unix time is the number of non-leap seconds since January 1, 1970.

This leads down the rabbit hole of how a second is defined. As long as a second is defined as 1/86400th of a day, and a day is the time it takes for the earth to rotate once on its axis, there’s no cause for confusion. But when you measure the rotation of the earth precisely enough, you can detect that the rotation is slowing down.

Days are getting longer

The rotation of the earth has been slowing down for a long time. A day was about 23½ hours when dinosaurs roamed the earth, and it is believed a day was only about 4 hours shortly after the moon formed. For most practical purposes a day is simply 24 hours. But for precision science you can’t have the definition of a second changing as the ball we live on spins slower.

This led to defining the second in terms of something more constant than the rotation of the earth, namely the oscillations of light waves, in 1967. And it led to tinkering with the calendar by adding leap seconds starting in 1972.

Cesium

You’ll hear that the second is defined in terms of vibrations of a cesium atom. But what exactly does that mean? What about the atom is vibrating? The second is not defined in terms of motions inside an atom, but by the frequency of the radiation produced by changes in an atom. Specifically, a second has been defined since 1967 as

the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom.

Incidentally, “cesium” is the American English spelling of the name of atomic element 55, and “caesium” is the IUPAC spelling.

The definition of a second raises several questions. Why choose cesium? Why choose that number of periods? And what are hyperfine levels and all that? I’ll attempt to answer the first two questions and punt on the third.

OK, so why cesium? Do we use cesium because that’s what atomic clocks use? And if so, why do atomic clocks use cesium?

As I understand it, the first atomic clocks were based on cesium, though now some atomic clocks are based on hydrogen or rubidium. And one reason for using Cs-133 was that it was easy to isolate that particular isotope with high purity.

Backward compatibility

So why 9,192,631,770 periods? Surely if we’d started from this definition we’d go with something like 10,000,000,000 periods. Clearly the number was chosen for backward compatibility with the historical definition of a second, but that only approximately settles the question. The historical definition was fuzzy, which was the point of the new definition, so which variation on the historical definition was used for backward compatibility?

The time chosen for backward compatibility was basically the length of the year 1900. Technically, the number of periods was chosen so that a second would be

the fraction 1/31556925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time.

Here “tropical year” means the time it took earth to orbit the sun from the perspective of the “fixed stars,” i.e. from a vantage point so far away that it doesn’t matter exactly how far away it is. The length of a year varies slightly, and that’s why they had to pick a particular one.

The astronomical definition was just motivation; it has been discarded now that 9,192,631,770 periods of a certain radiation is the definition. We would not change the definition of a second if an alien someday provided us with a more accurate measurement of the tropical year 1900.

Unix Time and a Modest Proposal

The time it takes earth to orbit the sun is not a simple multiple of the time it takes earth to rotate on its axis. The ratio isn’t even constant. The time it takes earth to circle the sun wobbles a bit, and the rotation of the earth is slowing down slightly.

The ratio is around 365.2422. Calling it 365 is too crude. Julius Caesar said we should call it 365 1/4, and that was good enough for a while. Then Pope Gregory said we really should use 365 97/400, and that’s basically good enough, but not quite. More on that here.
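A quick back-of-the-envelope comparison of these approximations (my own arithmetic, not from the original post):

    year = 365.2422                  # days per tropical year, approximately
    print(365.25 - year)             # Julian error: ~0.0078 days/year, about 3 days per 400 years
    print(365 + 97/400 - year)       # Gregorian error: ~0.0003 days/year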

Leap seconds

In 1972 we started adding leap seconds in order to synchronize the day and the year more precisely. Unlike leap days, leap seconds don’t occur on a strict schedule. A leap second is inserted when a committee of astronomers decides one should be inserted, about every two years.

An international standards body has decided to stop adding leap seconds by 2035. They cause so much confusion that it was decided that letting the year drift by a few seconds was preferable.

Unix time

Unix time is the number of seconds since the “epoch,” i.e. January 1, 1970, sorta.

If you were to naively calculate Unix time for this coming New Year’s Day, you’d get the right result.

New Year’s Day 2025

When New Year’s Day 2025 begins in England, the Unix time will be

(55 × 365 + 14) × 24 × 60 × 60 = 1735689600

This is because there are 55 years between 1970 and 2025, 14 of which were leap years.

However, that moment will be 1735689627 seconds after the epoch.
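You can check the arithmetic with Python’s standard library (a sketch I’ve added; calendar.timegm interprets a time tuple as UTC and, like Unix time, ignores leap seconds):

    import calendar
    from datetime import datetime

    t = calendar.timegm(datetime(2025, 1, 1).timetuple())
    print(t)   # 1735689600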

Non-leap seconds

Unix time is the number of non-leap seconds since 1970-01-01 00:00:00 UTC. There have been 27 leap seconds since 1970, and so Unix time is 27 seconds behind elapsed time.

Leap year analogy

You could think of a second in Unix time as being 1/86400th of a day. Every day has 86400 non-leap seconds, but some days have had 86401 seconds. A leap second could potentially be negative, though this has not happened yet. A day containing a negative leap second would have 86399 seconds.

The analogous situation for days would be to insist that every year have 365 days. February 28 would be the 59th day of the year, and March 1 would be the 60th day, even in a leap year.

International Atomic Time

What if you wanted a time system based on the actual number of elapsed seconds since the epoch? This is almost what International Atomic Time is.

International Atomic Time (abbreviated TAI, from the French temps atomique international) is ahead of UTC [1] by 37 seconds, not 27 seconds as you might expect. Although there have been 27 leap seconds since 1972, TAI dates back to 1958, and UTC was already 10 seconds behind TAI when leap seconds began in 1972.

So New Year’s Day will start in England at 2025-01-01 00:00:37 TAI.

A Modest Proposal

It seems our choices are to add leap seconds and endure the resulting confusion, or not add leap seconds and allow the year to drift with respect to the day. There is a third way: adjust the position of the earth periodically to keep the solar year equal to the average Gregorian calendar year. I believe this modest proposal [2] deserves consideration.

Kepler’s third law says the square of a planet’s orbital period is proportional to the cube of the semi-major axis of its orbit. This means that increasing earth’s orbital period by 1 second would only require moving earth about 3.16 km further from the sun.
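Here is the rough arithmetic behind that figure (a sketch I’ve added, assuming a circular orbit with semi-major axis 149.6 million km):

    # Kepler's third law: T^2 is proportional to a^3, so da/a = (2/3) dT/T
    T  = 365.2422 * 86400   # orbital period in seconds
    a  = 1.496e8            # semi-major axis in km
    dT = 1                  # one extra second of orbital period
    da = (2/3) * (dT / T) * a
    print(da)               # about 3.16 km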

***

[1] UTC stands for Universal Coordinated Time. From an earlier post,

The abbreviation UTC is an odd compromise. The French wanted to use the abbreviation TUC (temps universel coordonné) and the English wanted to use CUT (coordinated universal time). The compromise was UTC, which doesn’t actually abbreviate anything.

[2] In case you’re not familiar with the term “modest proposal,” it comes from the title of a satirical essay by Jonathan Swift. A modest proposal has come to mean an absurdly simple solution to a complex problem presented satirically.

Most popular posts of 2024

I looked at Hacker News to see which posts on this site were most popular. I didn’t look at my server logs, but generally the posts that get the most traffic are posts that someone submits to Hacker News.

Older posts popular this year

Two posts written earlier got a lot of traffic this year, namely

Writes large correct programs

from 2008 and

Where has all the productivity gone?

from 2021.

Posts written this year

The most popular post this year, at least on Hacker News, was

Why does FM sound better than AM?

The runner up was

Evaluating a class of infinite sums in closed form

The following post looks at a way for a satellite to move from one orbit to another that under some circumstances is more efficient (in terms of fuel, not in terms of time) than the more common Hohmann transfer maneuver.

Efficiently transferring to a much higher orbit

This post considers interpolation as a form of compression. Instead of saving a table of function values at fine-grained intervals, you could store values at points further apart and store interpolation formulas for recovering the lost precision.

Compression and interpolation

One of the arguments between Frequentist and Bayesian statisticians is whether you should be allowed to look at data as it accrues during an experiment, such as in A/B testing. If you do look at the interim data, how should you analyze it and how should you interpret the results?

Can you look at experimental results along the way or not?

Finally, I wrote a post about solving a problem I ran into with the command line utility find. As is often the case, I got a lot of useful feedback.

Resolving a mysterious problem with find


Series for the reciprocal of the gamma function

Stirling’s asymptotic series for the gamma function is

\Gamma(z) \sim (2\pi)^{1/2} z^{z - 1/2} e^{-z} \sum_{n=0}^\infty (-1)^n \frac{\gamma_n}{z^n}

Now suppose you’d like to find an asymptotic series for the function 1/Γ(z).

Since the series for Γ has the form f(z) times an infinite sum, it would make sense to look for a series for 1/Γ of the form 1/f(z) times an infinite sum. The hard part would be finding the new infinite sum. In general the series for a function and the series for its reciprocal look nothing alike.

Here’s where we have a pleasant surprise: the coefficients in the series for 1/Γ are exactly the same as the coefficients in the series for Γ, except the signs don’t alternate.

\frac{1}{\Gamma(z)} \sim (2\pi)^{-1/2} z^{-z +1/2} e^{z} \sum_{n=0}^\infty \frac{\gamma_n}{z^n}

Illustration

The following is not a proof, but it shows that the result is at least plausible.

Define Γ* to be Γ divided by the term in front of the infinite series:

\Gamma^*(z) = (2\pi)^{-1/2} z^{-z +1/2} e^{z} \Gamma(z)

Then the discussion above claims that Γ* and 1/Γ* have the same asymptotic series, except with alternating signs on the coefficients. So if we multiply the first few terms of the series for Γ* and 1/Γ* we expect to get something approximately equal to 1.

Now

\Gamma^*(z) = 1 + \frac{1}{12z} + \frac{1}{288z^2} - \frac{139}{51840z^3} - \cdots

and we claim

\frac{1}{\Gamma^*(z)} = 1 - \frac{1}{12z} + \frac{1}{288z^2} + \frac{139}{51840z^3} - \cdots

So if we multiply the terms up to third order we expect to get 1 and some terms involving powers of z in the denominator with exponent greater than 3. In fact the product equals

1 + \frac{571}{1244160 z^4} -\frac{19321}{2687385600 z^6}

which aligns with our expectations.
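The multiplication above is easy to reproduce symbolically. Here is a SymPy sketch (my addition) that multiplies the two truncated series and expands the product:

    from sympy import symbols, Rational, expand

    z = symbols('z')
    g  = 1 + Rational(1, 12)/z + Rational(1, 288)/z**2 - Rational(139, 51840)/z**3
    gi = 1 - Rational(1, 12)/z + Rational(1, 288)/z**2 + Rational(139, 51840)/z**3

    # Terms up to third order cancel; what remains starts at 1/z**4
    print(expand(g * gi))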

Starlink configurations

My nephew recently told me about being on a camping trip and seeing a long line of lights in the sky. The lights turned out to be Starlink satellites. It’s fairly common for people to report seeing lines of these satellites.

Four lights in the sky in a line

Why would the satellites be in a line? Wouldn’t it be much more efficient to spread them out? They do spread out, but they’re launched in groups. Satellites released into orbit at the same time initially orbit in a line close together.

It would seem the optimal strategy would be to spread communication satellites out evenly in a sphere. There are several reasons why that is neither desirable nor possible. It is not desirable because human population is far from evenly distributed. It’s very nice to have some coverage over the least-populated places on earth, such as Antarctica, but there is far more demand for service over the middle latitudes.

It is not possible to evenly distribute more than 20 points on a sphere, and so it would not be possible to spread out thousands of satellites perfectly evenly. However, there are ways to distribute arbitrarily many points somewhat evenly, such as in a Fibonacci lattice.
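Here is one common formulation of a Fibonacci lattice on the unit sphere (a sketch of the general idea, my addition, not anything specific to Starlink):

    import numpy as np

    def fibonacci_sphere(n):
        i = np.arange(n)
        golden = (1 + 5**0.5) / 2
        theta = 2 * np.pi * i / golden            # longitude spread by the golden ratio
        phi = np.arccos(1 - 2 * (i + 0.5) / n)    # polar angle, evenly spaced in cos(phi)
        return np.column_stack((np.sin(phi) * np.cos(theta),
                                np.sin(phi) * np.sin(theta),
                                np.cos(phi)))

    points = fibonacci_sphere(1000)   # 1000 roughly evenly spaced points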

It’s also not possible to distribute satellites in a static configuration. Unless a satellite is in geostationary orbit, it will constantly move relative to the earth. One problem with geostationary orbit is that it is at an altitude of about 36,000 km. Starlink satellites are in low earth orbit (LEO) between 300 km and 600 km altitude. It is less expensive to put satellites into LEO and there is less latency bouncing signals off satellites closer to the ground.

Satellites orbit at different altitudes, and altitude and velocity are tightly linked. Since you want satellites orbiting at different altitudes to avoid collisions, they’re necessarily orbiting at different velocities. Even if you wanted all satellites to orbit at the same altitude, this would require constant maintenance due to various real-world departures from ideal Keplerian conditions. Satellites are going to move around relative to each other whether you want them to or not.
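To see how tightly altitude and velocity are linked, here is a small sketch (my addition) computing circular-orbit speed at a few example altitudes using v = sqrt(GM/r):

    from math import sqrt

    GM = 3.986e14    # Earth's gravitational parameter, m^3/s^2
    R  = 6.371e6     # Earth's mean radius, m

    for alt_km in (300, 600, 35786):
        r = R + alt_km * 1e3
        print(alt_km, "km:", round(sqrt(GM / r)), "m/s")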

Related posts

Putting a face on a faceless account

I’ve been playing around with Grok today, logging into some of my X accounts and trying out the prompt “Draw an image of me based on my posts.” [1] In most cases Grok returned a graphic, but sometimes it would respond with a text description. In the latter case asking for a photorealistic image made it produce a graphic.

Here’s what I get for @AlgebraFact:

The icons for all my accounts are cerulean blue dots with a symbol in the middle. Usually Grok picks up on the color, as above. With @AnalysisFact, it dropped a big blue piece of a circle on the image.

For @UnixToolTip it kept the & from the &> in the icon. Generative AI typically does weird things with text in images, but it picked up “awk” correctly.

Here’s @ProbFact. Grok seems to think it’s a baseball statistics account.

Last but not least, here’s @DataSciFact.

I wrote a popular post about how to put Santa hats on top of symbols in LaTeX, and that post must have had an outsized influence on the image Grok created.

[1] Apparently if you’re logged into account A and ask it to draw B, the image will be heavily influenced by A’s posts, not B’s. You have to log into B and ask in the first person.

Can AI models reason: Just a stochastic parrot?

OpenAI has just released its full o1 model—a new kind of model that is more capable of multi-step reasoning than previous models. Anthropic, Google and others are no doubt working on similar products. At the same time, it’s hotly debated in many quarters whether AI models actually “reason” in a way similar to humans.

Emily Bender and her colleagues famously described large language models as nothing more than “stochastic parrots”—systems that simply repeat their training data blindly, based on a statistical model, with no real understanding (reminiscent of the Chinese Room experiment). Others have made similar comments, describing LLMs as “n-gram models on steroids” or a “fancy extrapolation algorithm.”

There is of course some truth to this. AI models sometimes generate remarkable results and yet lack certain basic aspects of understanding that might otherwise prevent them from sometimes producing nonsensical results. More to the point of “parroting” the training data, recent work from Yejin Choi’s group has shown how LLMs will at times cut and paste snippets from their training documents, almost verbatim, to formulate their outputs.

Are LLMs (just) glorified information retrieval tools?

The implication of these concerns is that an LLM can “only” repeat back what it was taught (albeit with errors). However, this view does not align with the evidence. LLM training is a compression process in which new connections between pieces of information are formed that were not present in the original data. This is evidenced both mathematically and anecdotally. In my own experience, I’ve gotten valid answers to obscure and detailed technical questions that I find hard to believe would exist in any training data in exactly that form. Whether you would call this “reasoning” or not might be open to debate, but regardless of what you call it, it is something more than just unadorned information retrieval like a “stochastic parrot.”

What is your experience? Let us know in the comments.