Elementary solutions to differential equations

Differential equations rarely have closed-form solutions. Some do, and these are emphasized in textbooks.

For this post we want to look specifically at homogeneous second order linear equations:

y″ + a(x) y′ + b(x) y = 0.

If the coefficient functions a and b are constant, then the solution can be written down in terms of elementary functions, i.e. functions a first-year calculus student would recognize. This would include, for example, polynomials, sines, and cosines, but would not include the gamma function, Bessel functions, Airy functions, etc.
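For example, SymPy's dsolve produces the elementary solution of a constant-coefficient equation directly. (This snippet is my own illustration, not part of the original post; the expected output is shown approximately.)

    from sympy import Function, dsolve, symbols

    # Constant-coefficient example: y'' + 3y' + 2y = 0 has characteristic
    # roots -1 and -2, so the general solution combines exp(-x) and exp(-2x).
    x = symbols('x')
    y = Function('y')
    print(dsolve(y(x).diff(x, 2) + 3*y(x).diff(x) + 2*y(x)))
    # roughly: Eq(y(x), C1*exp(-2*x) + C2*exp(-x))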

If the coefficients a and b are not constant, the differential equation usually does not have an elementary solution. In fact, you might wonder if it is ever possible in that case for the differential equation to have an elementary solution. Experience would suggest not.

A paper by Kovacic [1] thoroughly answers this question. The author gives algorithms for determining whether elementary solutions exist and how to find them if they do exist. The following example comes from that paper.

Consider the equation

y″ = ry

where [2]

r(x) = (4x⁶ − 8x⁵ + 12x⁴ + 4x³ + 7x² − 20x + 4) / (4x⁴).

Then

y(x) = x^(−3/2) (x² − 1) exp(x²/2 − x − 1/x)

is a solution, which the following Mathematica code verifies by evaluating to 0.

r[x_] := (4 x^6 - 8 x^5 + 12 x^4 + 4 x^3 + 7 x^2 - 20 x + 4)/(4 x^4)
f[x_] := x^(-3/2) (x^2 - 1) Exp[x^2/2 - x - 1/x]
Simplify[D[f[x], {x, 2}] - r[x] f[x]]
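
Here is the same check in Python with SymPy, a sketch that mirrors the Mathematica code; simplify should reduce the expression to 0.

    from sympy import symbols, exp, simplify, Rational

    x = symbols('x', positive=True)
    r = (4*x**6 - 8*x**5 + 12*x**4 + 4*x**3 + 7*x**2 - 20*x + 4) / (4*x**4)
    y = x**Rational(-3, 2) * (x**2 - 1) * exp(x**2/2 - x - 1/x)

    # y'' - r y should simplify to zero if y really is a solution
    print(simplify(y.diff(x, 2) - r*y))   # expected: 0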


[1] Jerald J. Kovacic. An algorithm for solving second order linear homogeneous differential equations. Journal of Symbolic Computation (1986) 2, 3–43.

[2] There’s an error in the paper, where the denominator of r is given as 4x rather than 4x⁴.

Finite rings

It occurred to me recently that I rarely hear about finite rings. I did a Google Ngram search to make sure this isn’t just my experience.

[Figure: Google Ngram comparing the frequencies of “finite group,” “finite ring,” and “finite field”]

Why are finite groups and finite fields common while finite rings are not?

Finite groups have relatively weak algebraic structure, and demonstrate a lot of variety. Finite fields have very strong algebraic structure. Their complete classification has been known for a long time and is easy to state.

I imagine that most of the references to finite groups above have to do with classifying finite groups, and that most of the references to finite fields have to do with applications of finite fields, which are many.

You can see that references to finite groups hit their peak around the time of the Feit-Thompson theorem in 1962, and drop sharply after the classification of finite simple groups was essentially done in 1994. There’s a timeline of the progress toward the classification theorem on Wikipedia.

Rings have more structure than groups, but less structure than fields. Finite rings in particular are in a kind of delicate position: they easily become fields. Wedderburn’s little theorem says every finite domain is a field.

The classification of finite rings is much simpler than that of finite groups. And in applications you often want a finite field. Even if a finite ring (not necessarily a field) would do, you’d often use a finite field anyway.

In summary, my speculation as to why you don’t hear much about finite rings is that they’re not as interesting to classify as finite groups, and not as useful in application as finite fields.


Mixing error-correcting codes and cryptography

Secret codes and error-correcting codes have nothing to do with each other. Except when they do!

Error-correcting codes

Error-correcting codes make digital communication possible. Without some way to detect and correct errors, the corruption of a single bit could wreak havoc. A simple example of an error-detecting code is a checksum. A more sophisticated example would be erasure codes, a method used by data centers to protect customer data against hard drive failures or even entire data centers going offline.
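
As a toy illustration (my own, far simpler than anything a data center uses), a single parity bit detects any one flipped bit, though it cannot say which bit flipped or correct it:

    def add_parity(bits):
        """Append a parity bit so the total number of 1s is even."""
        return bits + [sum(bits) % 2]

    def parity_ok(bits):
        """Return True if no single-bit error is detected."""
        return sum(bits) % 2 == 0

    sent = add_parity([1, 0, 1, 1])   # [1, 0, 1, 1, 1]
    assert parity_ok(sent)
    sent[2] ^= 1                      # flip one bit in transit
    assert not parity_ok(sent)        # the error is detected, but not located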

People who work in coding theory are quick to point out that they do not work in cryptography. “No, not that kind of code. Error-correcting codes, not secret codes.” The goal isn’t secrecy. The goal is to maximize the probability of correctly transmitting data while minimizing the amount of extra information added.

Codes and ciphers

You don’t hear the word “code” used in connection with cryptography much anymore. People used to refer to “codes and ciphers” in one breath. Historically, the technical distinction was that a code operated on words, while a cipher operated on characters. Codes in this sense have long been obsolete, but people still speak of codes colloquially.

David Kahn’s classic book on pre-modern cryptography is entitled The Codebreakers, not the Cipherbreakers, because the public at the time was more familiar with the term code than the term cipher. Maybe that’s still the case: Jason Fagone, for example, entitled his biography of Elizebeth Friedman The Woman Who Smashed Codes. Perhaps the author suggested The Woman Who Smashed Ciphers and an editor objected.

Code-based cryptography

If you’re accustomed to the older use of “codes,” the term “code-based cryptography” is redundant. But it means something specific in modern usage: cryptographic systems that incorporate error-correcting codes. So error-correcting codes and secret “codes” do have something to do with each other after all!

Robert McEliece had this idea back in 1978. His encryption method starts with a particular error-correcting code, a binary Goppa code, and scrambles it with an invertible linear transformation. At a very high level, McEliece’s method boils down to a secret factorization, sorta like RSA but even more like oil and vinegar. The public key is the product of the Goppa code and the linear transformation, but only the owner knows the factorization of this key.

To encrypt a message with McEliece’s method, the sender adds a specific amount of random noise, noise that the Goppa code can remove. An attacker faces a challenging computational problem to recover the message without knowing how to factor the public key.
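
To make that structure concrete, here is a deliberately toy sketch of the McEliece pattern. It is not the real system: a Hamming(7,4) code stands in for the binary Goppa code, the matrices are tiny, and nothing about it is secure. It only shows how the secret scramble, the secret permutation, and the error-correcting decoder fit together.

    import numpy as np

    # Systematic Hamming(7,4), which corrects one error: codeword = [4 data bits | 3 parity bits]
    G = np.array([[1,0,0,0, 1,1,0],
                  [0,1,0,0, 1,0,1],
                  [0,0,1,0, 0,1,1],
                  [0,0,0,1, 1,1,1]])
    H = np.array([[1,1,0,1, 1,0,0],
                  [1,0,1,1, 0,1,0],
                  [0,1,1,1, 0,0,1]])

    def hamming_decode(c):
        """Correct up to one flipped bit, then return the 4 data bits."""
        s = H @ c % 2                              # syndrome
        if s.any():                                # nonzero syndrome: locate and fix the error
            j = next(j for j in range(7) if np.array_equal(H[:, j], s))
            c = c.copy()
            c[j] ^= 1
        return c[:4]

    # Private key: G, a secret invertible scramble S, and a secret permutation P.
    S = np.array([[1,1,0,0],
                  [0,1,0,0],
                  [0,0,1,1],
                  [0,0,0,1]])                      # chosen so that S is its own inverse mod 2
    perm = [2, 6, 0, 5, 1, 3, 4]                   # secret column permutation
    P = np.eye(7, dtype=int)[:, perm]

    # Public key: the "product" S G P, which hides the structure of G.
    G_pub = S @ G @ P % 2

    def encrypt(m):
        e = np.zeros(7, dtype=int)
        e[np.random.randint(7)] = 1                # exactly one bit of random noise
        return (m @ G_pub + e) % 2

    def decrypt(c):
        c = c[np.argsort(perm)]                    # undo the permutation
        mS = hamming_decode(c)                     # the decoder strips off the noise
        return mS @ S % 2                          # undo the scramble (S is self-inverse)

    m = np.array([1, 0, 1, 1])
    assert np.array_equal(decrypt(encrypt(m)), m)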

Post-quantum cryptography

McEliece’s method did not attract much interest at the time because it requires much larger public keys than other methods, such as RSA. However, there is renewed interest in McEliece’s approach because his scheme is apparently quantum-resistant whereas RSA and other popular public key systems are not.

If and when large quantum computers become practical, they could factor the product of large primes efficiently, and thus break RSA. They could also solve the discrete logarithm and elliptic curve discrete logarithm problems, breaking Diffie-Hellman and elliptic curve cryptosystems. All public key cryptosystems now in common use would be broken.

Why worry about this now while quantum computers don’t exist? (They exist, but only as prototypes. So far the largest number a quantum computer has been able to factor is 21.) The reason is that it takes a long time to develop, analyze, standardize, and deploy encryption methods. There’s also the matter of forward security: someone could store encrypted messages with the hope of decrypting them in the future. This doesn’t matter for cat photos transmitted over TLS, but it could matter for state secrets; governments may be encrypting documents that they wish to keep secret for decades.

NIST is sponsoring a competition to develop and standardize quantum-resistant encryption methods. Two months ago NIST announced the candidates that advanced to the second round. Seven of these methods use code-based cryptography, including the classic McEliece method and six variations: BIKE, HQC, LEDAcrypt, NTS-KEM, ROLLO, and RQC.


US Army applying new areas of math

Many times on this blog I’ve argued that the difference between pure and applied math is motivation. As my graduate advisor used to say, “Applied mathematics is not a subject classification. It’s an attitude.”

[Image: Uncle Sam wants homotopy type theory]

Traditionally there was general agreement regarding what is pure math and what is applied. Number theory and topology, for example, are pure, while differential equations and numerical analysis are applied.

But then public key cryptography and topological data analysis brought number theory and topology over into the applied column, at least for some people. And there are people working in differential equations and numerical analysis that aren’t actually interested in applications. It would be more accurate to say that some areas of math are more directly and more commonly applied than others. Also, some areas of math are predominantly filled with people interested in applications and some are not.

The US Army is interested in applying some areas of math that you would normally think of as very pure, including homotopy type theory (HoTT).

From an Army announcement:

Modeling frameworks are desired that are able to eschew the usual computational simplification assumptions and realistically capture … complexities of real world environments and phenomena, while still maintaining some degree of computational tractability. Of specific interest are causal and predictive modeling frameworks, hybrid model frameworks that capture both causal and predictive features, statistical modeling frameworks, and abstract categorical models (cf. Homotopy Type Theory).

And later in the same announcement:

Homotopy Type Theory and its applications are such an area that is of significant interest in military applications.

HoTT isn’t the only area of math the Army announcement mentions. There are the usual suspects, such as (stochastic) PDEs, but also more ostensibly pure areas of math such as topology; the word “topological” appears 23 times in the document.

This would be fascinating. It can be interesting when a statistical model works well in application, but it’s no surprise: that’s what statistics was developed for. It’s more interesting when something finds an unexpected application, such as when number theory entered cryptography. The applications the Army has in mind are even more interesting because the math involved is more abstract and, one would have thought, less likely to be applied.


A genius can admit finding things difficult

[Photo of Karen Uhlenbeck]

Karen Uhlenbeck has just received the Abel Prize. Many say that the Fields Medal is the analog of the Nobel Prize for mathematics, but others say that the Abel Prize is a better analog. The Abel prize is a recognition of achievement over a career whereas the Fields Medal is only awarded for work done before age 40.

I had a course from Karen Uhlenbeck in graduate school. She was obviously brilliant, but what I remember most from the class was her candor about things she didn’t understand. She was already famous at the time, having won a MacArthur genius award and other honors, so she didn’t have to prove herself.

When she presented the definition of a manifold, she made an offhand comment that it took her a month to really understand that definition when she was a student. She obviously understands manifolds now, having spent her career working with them.

I found her comment extremely encouraging. It shows it’s possible to become an expert in something you don’t immediately grasp, even if it takes you weeks to grok its most fundamental concept.

Uhlenbeck wasn’t just candid about things she had found difficult in the past. She was also candid about things she found difficult at the time. In the middle of a lecture she would grumble things like “I can never remember this.” She was not a polished lecturer—far from it—but she was inspiring.

Related posts

(The connection between Karen Uhlenbeck, Ted Odell, and John Tate is that they were all University of Texas math faculty.)

Photo of Karen Uhlenbeck in 1982 by George Bergman [GFDL], via Wikimedia Commons

Thermocouple polynomials and other sundries

I was looking up something on the NIST (National Institute of Standards and Technology) web site the other day and ran across thermocouple polynomials. I wondered what that could be, assuming “thermocouple” was a metaphor for some algebraic property. No, it refers to physical thermocouples. The polynomials are functions for computing voltage as a function of temperature, and temperature as a function of voltage, for a variety of types of thermocouples. See the NIST ITS-90 Thermocouple Database.

I keep running into NIST’s eclectic collection of useful information. Three examples:

I wonder what’s going to take me back to NIST next.

Counting irreducible polynomials over finite fields

You can construct a finite field of order pⁿ for any prime p and positive integer n. The elements are polynomials modulo an irreducible polynomial of degree n, with coefficients in the integers mod p. The choice of irreducible polynomial matters, though the fields you get from any two choices will be isomorphic.

For example, the AES encryption algorithm uses the finite field GF(2⁸), i.e. the finite field with 2⁸ = 256 elements. Except we need to be a little careful about saying “the” field. Since we’re doing concrete calculations, the choice of irreducible polynomial matters, and AES dictates the polynomial

x⁸ + x⁴ + x³ + x + 1.
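
To see what arithmetic modulo this polynomial looks like, here is a standard way to multiply two elements of AES’s field, with bytes standing for polynomials of degree less than 8. (A sketch of my own, not from the post; the test values 0x53 and 0xCA are a well-known pair of inverses in this field.)

    def gf256_mul(a, b):
        """Multiply in GF(2^8): bytes as polynomials over GF(2),
        reduced modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
        product = 0
        for _ in range(8):
            if b & 1:
                product ^= a          # add (xor) the current multiple of a
            b >>= 1
            carry = a & 0x80          # does x^8 appear after the shift?
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B             # reduce by the AES polynomial
        return product

    assert gf256_mul(0x53, 0xCA) == 0x01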

Another example from cryptography is Galois Counter Mode (GCM), which uses the finite field GF(2¹²⁸), specifying the irreducible polynomial

x¹²⁸ + x⁷ + x² + x + 1.
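
As a quick sanity check (again my own, not in the original post), SymPy can confirm that both of these polynomials really are irreducible over GF(2):

    from sympy import symbols, Poly

    x = symbols('x')
    aes = Poly(x**8 + x**4 + x**3 + x + 1, x, modulus=2)
    gcm = Poly(x**128 + x**7 + x**2 + x + 1, x, modulus=2)
    print(aes.is_irreducible, gcm.is_irreducible)   # expected: True True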

How many other irreducible polynomials are there over GF(2⁸), or any other field for that matter? We’ll assume the leading coefficient is 1, i.e. we’ll count monic polynomials, because otherwise we can just divide by the leading coefficient.

The number of monic irreducible polynomials of degree n over a field with q elements is given by

I_q(n) = \frac{1}{n} \sum_{d | n} \mu(d) q^{n/d}

where μ is the Möbius function and the sum is over all positive integers that divide n. We can implement this function succinctly in Python.

    from sympy import mobius, divisors

    def I_q(n, q):
        # Number of monic irreducible polynomials of degree n over GF(q),
        # by Möbius inversion. Note the integer exponent n//d and exact division by n.
        terms = [mobius(d)*q**(n//d) for d in divisors(n)]
        return sum(terms)//n

We can compute I_q(8, 2) to find out there are 30 monic irreducible polynomials of degree 8 with coefficients in GF(2), i.e. with one-bit coefficients. There are 256 monic polynomials—the coefficient of xᵏ can be either 0 or 1 for k = 0, …, 7—but only 30 of these are irreducible. Similarly, there are 2¹²⁸ monic polynomials of degree 128 with binary coefficients, and roughly 2¹²¹ of them are irreducible. Clearly it’s convenient in applications like GCM to use a polynomial of low weight, i.e. one with few non-zero coefficients.

Note that in the paragraph above we count the number of monic irreducible polynomials with coefficients in GF(2) that we could use in constructing GF(2⁸). We haven’t considered how many monic irreducible polynomials there are in GF(2⁸), i.e. with coefficients not just in GF(2) but in GF(2⁸). That would be a much larger number. If we call I_q(8, 256) we get 2,305,843,008,676,823,040.
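
Putting the I_q function above to work on the numbers quoted in this post:

    print(I_q(8, 2))      # 30 monic irreducible polynomials of degree 8 over GF(2)
    print(I_q(128, 2))    # roughly 2**121
    print(I_q(8, 256))    # 2305843008676823040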

Average distance between planets

What is the closest planet to Earth?

The planet whose orbit is closest to the orbit of Earth is clearly Venus. But what planet is closest? That changes over time. If Venus is between the Earth and the sun, Venus is the closest planet to Earth. But if Mercury is between the Earth and the sun, and Venus is on the opposite side of the sun, then Mercury is the closest planet to Earth.

On average, Mercury is the closest planet to the Earth, closer than Venus! In fact, Mercury is the closest planet to every planet, on average. A new article in Physics Today gives a detailed explanation.

The article gives two explanations, one based on probability and one based on simulated orbits. The former assumes planets are located at random points along their orbits. The latter models the actual movement of the planets over the last 10,000 years. The results agree to within 1%.
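
The probabilistic version is easy to reproduce. Here is a sketch that assumes circular, coplanar orbits with uniformly random phases, which is cruder than either method in the article but shows the same ordering; the orbital radii are approximate values in astronomical units.

    import numpy as np

    rng = np.random.default_rng(0)
    radii = {"Mercury": 0.387, "Venus": 0.723, "Mars": 1.524}   # approximate AU
    N = 1_000_000

    for name, r in radii.items():
        theta = rng.uniform(0, 2*np.pi, N)   # random angle between Earth and the planet
        # law of cosines: distance between points on two concentric circles
        d = np.sqrt(1.0 + r**2 - 2*r*np.cos(theta))
        print(f"{name}: mean distance ≈ {d.mean():.3f} AU")
    # Mercury comes out with the smallest mean distance to Earth.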

It’s interesting that the two approaches agree. Obviously planet positions are not random. But over time the relative positions of the planets are distributed much as if they were random. They’re ergodic.

My first response would be to model this as if the positions were indeed random. But my second thought was that the actual motion of the planets might have resonances that keep the distances from being ergodic. Apparently not, or at least the deviation from ergodicity is small.


All elliptic curves over fields of order 2 and 3

Introductions to elliptic curves often start by saying that elliptic curves have the form

y² = x³ + ax + b

where 4a³ + 27b² ≠ 0. Then later they say “except over fields of characteristic 2 or 3.”

What does characteristic 2 or 3 mean? The order of a finite field is the number of elements it has. The order is always a prime or a prime power, and the characteristic is that prime. For example, a field of order 9 = 3² has characteristic 3. So another way to phrase the exception above is to say “except over fields of order 2ⁿ or 3ⁿ.”

If we’re looking at fields not just of characteristic 2 or 3, but of order 2 or 3, there can’t be that many of them. Why not just list them? That’s what I plan to do here.
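
In the meantime, here is a small sketch of what the usual Weierstrass form looks like computationally when the characteristic is neither 2 nor 3. The curve y² = x³ + x + 1 over GF(5) is my own arbitrary choice for illustration, not one of the fields this post is about.

    def count_points(a, b, p):
        """Count points on y^2 = x^3 + ax + b over GF(p), p > 3,
        including the point at infinity."""
        assert (4*a**3 + 27*b**2) % p != 0, "curve must be nonsingular"
        affine = sum(1 for x in range(p) for y in range(p)
                     if (y*y - (x**3 + a*x + b)) % p == 0)
        return affine + 1   # +1 for the point at infinity

    print(count_points(1, 1, 5))   # 9 points on y² = x³ + x + 1 over GF(5)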