When I interviewed Daniel Spielman at this year’s Heidelberg Laureate Forum, we began our conversation by looking for common mathematical ground. The first thing that came up was orthogonal polynomials. (If you’re wondering what it means for two polynomials to be orthogonal, see here.)
JC: Orthogonal polynomials are kind of a lost art, a topic that was common knowledge among mathematicians maybe 50 or 100 years ago and now they’re obscure.
DS: In the first course I taught, I spent a few lectures on orthogonal polynomials because they kept coming up as the solutions to problems in different areas I cared about. Chebyshev polynomials come up in solving systems of linear equations, for instance if you want to understand how the conjugate gradient method behaves. The analysis of error-correcting codes and sphere packing has a lot of orthogonal polynomials in it. They came up in a course on multilinear algebra I had in grad school. And they come up in matching polynomials of graphs, which is something people don’t study much anymore. … They’re coming back. They come up a lot in random matrix theory. … There are certain things that come up again and again and again, so you’ve got to know what they are.
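To make the orthogonality concrete, here is a minimal numerical sketch (my illustration, not anything from the interview). Chebyshev polynomials of the first kind are orthogonal on [-1, 1] with respect to the weight 1/sqrt(1 - x^2), and Gauss-Chebyshev quadrature integrates against exactly that weight; the helper `T` below is just the cosine formula for T_n.

```python
import numpy as np

# Gauss-Chebyshev quadrature: the integral of f(x) / sqrt(1 - x^2) over
# [-1, 1] is approximated by (pi / N) * sum of f at the nodes below, and
# the rule is exact for polynomials of degree up to 2N - 1.
N = 50
k = np.arange(1, N + 1)
nodes = np.cos((2 * k - 1) * np.pi / (2 * N))

def T(n, x):
    """Chebyshev polynomial of the first kind: T_n(x) = cos(n * arccos(x))."""
    return np.cos(n * np.arccos(x))

# Inner products <T_m, T_n> under the Chebyshev weight: pi when m = n = 0,
# pi/2 when m = n >= 1, and 0 when m != n.
for m in range(4):
    for n in range(m, 4):
        inner = (np.pi / N) * np.sum(T(m, nodes) * T(n, nodes))
        print(f"<T_{m}, T_{n}> = {inner:+.6f}")
```

The connection to conjugate gradients that Spielman mentions is the classical error bound ||e_k||_A <= 2 ((sqrt(kappa) - 1)/(sqrt(kappa) + 1))^k ||e_0||_A, which falls out of an extremal property of these same Chebyshev polynomials.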
* * *
More from my interview with Daniel Spielman:
* * *

“common knowledge among mathematicians maybe 50 or 100 years ago and now they’re obscure.”
Data point: we studied them at Berkeley in 1988. They are part of the standard syllabus for Numerical Analysis, a required class for both “pure” and applied math majors. Please tell me that I am not that old.
I studied them as part of my theoretical physics degree in Sheffield, UK, back in 1997. The course was called something like ‘special functions in physics’.
It was not a course that people normally took. It was an option for ‘normal’ physicists and compulsory for the small number of theoreticians.
I don’t think I’ve used them much over the years but the course remains in my mind as one I greatly enjoyed.