This post is about the word problem. When mathematicians talk about “the word problem” we’re not talking about elementary math problems expressed in prose, such as “If Johnny has three apples, ….”

The word problem in algebra is to decide whether two strings of symbols are equivalent given a set of algebraic rules. I go into a longer description here.

I know *that* the word problem is hard, but I hadn’t given much thought to *why* it is hard. It has always seemed more like a warning sign than a topic to look into.

“Why don’t we just …?”

“That won’t work because … is equivalent to the word problem.”

In logic terminology, the word problem is **undecidable**. There can be no algorithm to solve the word problem in general, though there are algorithms for special cases.

So *why* is the word problem undecidable? As with most (all?) things that have been shown to be undecidable, the word problem is undecidable because the halting problem is undecidable. If there were an algorithm that could solve the word problem in general, you could use it to solve the halting problem. The halting problem is to write a program that can determine whether an arbitrary program, run on a given input, will eventually terminate, and it can be shown that no such program can exist.

The impulse for writing this post came from stumbling across Keith Conrad’s expository article Why word problems are hard. His article goes into some of the algebraic background of the problem—free groups, (in)finitely generated groups, etc.—and gives some flavor for why the word problem is harder than it looks. The bottom line is that the word problem is hard because the halting problem is hard, but Conrad’s article gives you a little more of an idea what’s going on than just citing a logic theorem.

I still don’t have a cocktail-party answer for why the word problem is hard. Suppose a bright high school student who had heard of the word problem were at a cocktail party (drinking a Coke, of course) and asked why the word problem is hard. Suppose also that this student had not heard of the halting problem. Would the simplest explanation be to start by explaining the halting problem?

Suppose we change the setting a bit. You’re teaching a group theory course, and the students know about groups and generators but not about the halting problem. How might you give them a feel for why the word problem is hard? You might ask them to read Keith Conrad’s paper and point out that it shows that problems simpler than the word problem are harder than they seem at first.

Here’s my cocktail party summary:

In a general rewriting system, rules can expand a word as well as reduce it. If words can expand without bound, a search for a rewrite path demonstrating that two words are equal may wander through ever-longer words without end.

You can’t be sure when to stop, so you never find an answer.
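To make that concrete, here is a minimal sketch (the function name, rule format, and search budget are all illustrative, not from any particular algebra package) of a breadth-first search over a rewriting system. Each rule may be applied in either direction, so words can grow as well as shrink. The key point is in the return values: the search can confirm equality by finding a path, but running out of budget proves nothing.

```python
from collections import deque

def equivalent(start, target, rules, max_words=10_000):
    """Search for a rewrite path from start to target.

    rules is a list of (lhs, rhs) pairs; a rewrite replaces one
    occurrence of lhs with rhs. Rules apply in both directions,
    so intermediate words can expand without bound.

    Returns True if a path is found before max_words distinct
    words have been generated, and None (not False!) otherwise:
    exhausting the budget tells us nothing, which is the heart
    of the undecidability.
    """
    if start == target:
        return True
    # Use each rule forward and backward.
    moves = rules + [(rhs, lhs) for (lhs, rhs) in rules]
    seen = {start}
    queue = deque([start])
    while queue and len(seen) < max_words:
        word = queue.popleft()
        for lhs, rhs in moves:
            i = word.find(lhs)
            while i != -1:
                new = word[:i] + rhs + word[i + len(lhs):]
                if new == target:
                    return True
                if new not in seen:
                    seen.add(new)
                    queue.append(new)
                i = word.find(lhs, i + 1)
    return None  # budget exhausted; no conclusion
```

For example, with the rules `ab → ε` and `ba → ε` (so `a` and `b` act like inverses), the search quickly reduces `aabb` to the empty word, but asked whether `a` equals `b` it can only give up, no matter how large the budget.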

Marc: I like that, and I would expand it a little and draw a parallel to the Collatz problem (which most people know, or can have explained to them in a few minutes). In short, there’s no upper bound to how far you have to go up before you start coming back down… which means you’re never allowed to give up; it could always be in the next place you look!

(Of course, the Collatz problem *might* be solvable, but as long as our present state of knowledge doesn’t give us any way to prove that certain numbers escape, or that none of them do, the parallel is there).