I recently got a review copy of Scientific Computing: A Historical Perspective by Bertil Gustafsson. I thought that thumbing through the book might give me ideas for new topics to blog about. It still may, but mostly it made me think of numerical methods I’ve *already* blogged about.

In historical order, or at least in the order of Gustafsson’s book:

- Fixed point iteration
- Newton’s method for root-finding
- Runge phenomenon
- Gauss quadrature
- Taylor series
- Fourier series
- Gibbs phenomenon
- Fourier transform
- Runge-Kutta
- Linear programming
- Nonlinear optimization
- Iterative linear solvers
- Natural cubic spline interpolation
- Wavelets
- Fast Fourier Transform (FFT)
- Singular value decomposition
- Monte Carlo methods
- MCMC

The items above link to posts I’ve written about those methods. What about the methods I have *not* written about?

## PDEs

I like to write fairly short, self-contained blog posts, and I’ve written about algorithms compatible with that. The methods in Gustafsson’s book that I haven’t written about are mostly methods in partial differential equations. I don’t see how to write short, self-contained posts about numerical methods for PDEs. In any case, my impression is that not many readers would be interested. If you are one of the few readers who *would* like to see more about PDEs, you may enjoy the following somewhat personal rambling about PDEs.

I studied PDEs in grad school—mostly abstract theory, but also numerical methods—and expected PDEs to be a big part of my career. I’ve done some professional work with differential equations, but the demand for other areas, particularly probability and statistics, has been far greater. In college I had the impression that applied math was practically synonymous with differential equations. I think that was closer to being true a generation or two ago than it is now.

My impression of the market demand for various kinds of math is no doubt colored by my experience. When I was at MD Anderson we had one person in biomathematics working in PDEs, and then none when she left. There may have been people at MDACC doing research into PDEs for modeling cancer, but they weren’t in the biostatistics or biomathematics departments. People are certainly working with differential equations in cancer research, but if there were so few at the world’s largest cancer center, I expect there aren’t many doing research in that area.

Since leaving MDACC to work as a consultant I’ve seen little demand for differential equations, and what demand there has been was more for Kalman filters than for differential equations per se. A lot of companies hire people to numerically solve PDEs, but there don’t seem to be many who want to use a consultant for such work. I imagine most companies with an interest in PDEs are large and have a staff of engineers and applied mathematicians working together. There’s more demand for statistical consulting because companies are likely to have only an occasional need for statistics. The companies that need PDEs, say for making finite element models of oil reservoirs or airplanes, have an ongoing need and hire accordingly.

What about applications to options pricing? I have never seen so many PDEs as when I considered problems to do with options chains (for example, optimal exercise of options contracts; there are still a lot of unsolved problems with no analytical solutions). Just my two cents.

Anyway, love your blog. Thanks for all the insight.

Thanks, Matthew. A lot of money goes into financial applications of stochastic DEs, but I don’t think much of it is spent on consultants. If I’m wrong, I should go after a new market. :)

From an engineering perspective, control loops (and coupled systems in general) are rife with PDEs. Of course, my primary goal as a software developer was to run away from them as fast as possible, and instead find an approximation that permitted “good enough” solutions to be generated “fast enough” (in my area of sensor development for real-time instrumentation).

Quite often the PDEs themselves were too complex to reduce to closed form or even identify from the data when too many factors were involved. Most of the useful PDEs in our R&D were encountered when creating theoretical sensor models, where they were used to guide the design of sensors having the best odds of actually sensing the desired signal.

While I was good enough at the math, I preferred to stay as close as possible to the data. So I cheated: I’d use SVD/PCA to find a strong correlation, identify its source, then model it with low-order polynomials to carefully remove as much as possible, and repeat until the desired signal dominated the resulting data.
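
[Editor’s note: a minimal sketch of the peel-and-repeat workflow described above, on synthetic data. The data, the function name `peel`, and all parameters are invented for illustration; the commenter’s actual instruments and signals are unknown.]

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)

# Synthetic "lab data": a small sinusoidal signal of interest buried under a
# large smooth confound (think thermal drift), across 20 repeated runs.
signal = 0.1 * np.sin(2 * np.pi * 25 * t)
drift = 5.0 * (t - 0.3) ** 2 + 2.0 * t
X = np.array([signal + drift + 0.05 * rng.standard_normal(t.size)
              for _ in range(20)])            # shape (runs, samples)

def peel(X, t, degree=3, n_iter=2):
    """Find the dominant component via SVD, model its smooth part with a
    low-order polynomial in t, remove that trend from every run, repeat."""
    X = X - X.mean(axis=1, keepdims=True)     # center each run
    for _ in range(n_iter):
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        comp = Vt[0]                          # dominant pattern over time
        # Low-order polynomial captures the smooth confound, not the signal
        trend = np.polyval(np.polyfit(t, comp, degree), t)
        amp = X @ trend / (trend @ trend)     # per-run amplitude of the trend
        X = X - np.outer(amp, trend)          # subtract it; repeat on residual
    return X
```

In this toy setup the smooth drift dominates the first singular vector, the cubic fit captures the drift but cannot chase the 25-cycle sine, and subtracting the fitted trend leaves the desired signal dominating the residual, which is the effect the workflow above is after.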

For me, the real key to avoiding having to solve PDEs was getting good lab data, which meant combining “clean” stimuli and signals with a good analysis strategy, both developed by applying the “Design of Experiments” discipline.

Of course, even the best experiments can fail to help solve the problem, where I’d be stuck at: “I know the signal is in there, but I can’t isolate it!” At which point I’d gather my data and toss it over the wall to the physicists and materials scientists. Let them wrangle with the PDEs. Or, as most often happened, improve their model based on the data, then redesign the physical sensor to diminish the “unanticipated coupling”.

Unfortunately, sometimes we had to give up when we realized our “sensor” was best modeled as a random noise source. Which happened about 25% of the time. Another 50% of the time the signal wasn’t as good as that provided by existing sensors (most often, our new sensor simply had no stable calibration). Another 20% of the time our fantastic new sensor couldn’t be manufactured at an acceptable cost.

The final 5% was what we lived for: A new sensor that outperformed everything else, which we could integrate into a profitable instrument and system. For example, in one project the heart of the sensor was a single FET the size of an entire silicon wafer. We didn’t dare patent it because it wasn’t really “new” in any way other than size. The profit from that one sensor paid for years of the 95% failures.

And, no, I’m not going to tell you what we sensed with it.

I think you hit the nail on the head – we engineers rely on established software to solve PDEs when we are looking to numerically solve a problem. In my line of work, if we need to pull a consultant in to help solve issues with PDEs, we more often than not call the software company, either to help us through the licensing agreement we have or to hire them as a sub-consultant. When we do hire a consultant like yourself to help with the math, it mainly has to do with dealing with large amounts of data in an efficient way that we do not have the tools for yet. Recently, we put a former employee on retainer to help with such things, as he has found a niche in that market, providing a lot of the skills that you blog about here.

It’s already been said, but I too love this blog even if 95% of the info goes over my head. I’m in it for that 5% that expands my knowledge base and helps me solve the problems of the future, or those pesky ones that eluded me in the past.

All the best!

Thanks, Charlie!

To echo what Charlie said, much PDE-related work may have been made obsolete by computers. Before electronic computing, even solving *linear* equations could be a career. Consider the first job of Konrad Zuse, a computing pioneer:

“After graduating from a technical college [in 1935], he got a job as a stress analyst for an aircraft company in Berlin, solving linear equations that incorporated all sorts of load and strength and elasticity factors. Even using mechanical calculators, it was almost impossible for a person to solve in less than a day more than six simultaneous linear equations with six unknowns. If there were twenty-five variables, it could take a year. So Zuse, like so many others, was driven by the desire to mechanize the tedious process of solving mathematical equations.” (Walter Isaacson, *The Innovators*, p. 52)
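
[Editor’s note: for a sense of scale, the 25-unknown system Isaacson says could take a year by hand is dispatched in microseconds by any modern linear algebra library. A quick NumPy illustration, with random coefficients standing in for the load, strength, and elasticity factors:]

```python
import time
import numpy as np

rng = np.random.default_rng(1)
n = 25                           # the size Isaacson says could take a year by hand
A = rng.standard_normal((n, n))  # stand-in for load/strength/elasticity coefficients
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)        # LU factorization with partial pivoting
elapsed = time.perf_counter() - start

print(f"max residual: {np.max(np.abs(A @ x - b)):.2e}, time: {elapsed * 1e6:.0f} microseconds")
```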