Approximating a solution that doesn’t exist

The following example made an impression on me when I first saw it years ago. I still think it’s an important example, though I’d draw a different conclusion from it today.

Problem: Let y(t) be the solution to the differential equation y' = t^2 + y^2 with y(0) = 1. Calculate y(1).

If we use Euler’s numerical method with a step size h = 0.1, we get y(1) = 7.19. If we reduce the step size to 0.05 we get y(1) = 12.32. If we reduce the step size further to 0.01, we get y(1) = 90.69. That’s strange. Let’s switch over to a more accurate method, 4th order Runge-Kutta. With a step size of 0.1 the Runge-Kutta method gives 735.00, and if we use a step size of 0.01 we get a result larger than 10^15. What’s going on?
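
Here is a sketch of the experiment in Python. This is a reconstruction for illustration, not code from any particular source, and the exact figures depend a little on implementation details, but the pattern is robust. Near the blow-up the iterates can exceed floating point range, in which case the functions below report infinity.

    def f(t, y):
        return t**2 + y**2

    def euler(h, t_end=1.0):
        t, y = 0.0, 1.0
        for _ in range(round(t_end / h)):
            try:
                y += h * f(t, y)
            except OverflowError:   # iterates blew past floating point range
                return float("inf")
            t += h
        return y

    def rk4(h, t_end=1.0):
        t, y = 0.0, 1.0
        for _ in range(round(t_end / h)):
            try:
                k1 = f(t, y)
                k2 = f(t + h/2, y + h*k1/2)
                k3 = f(t + h/2, y + h*k2/2)
                k4 = f(t + h, y + h*k3)
                y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
            except OverflowError:
                return float("inf")
            t += h
        return y

    for h in (0.1, 0.05, 0.01):
        print(f"Euler, h = {h}: y(1) = {euler(h):.6g}")
    for h in (0.1, 0.01):
        print(f"RK4,   h = {h}: y(1) = {rk4(h):.6g}")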

The problem presupposes that a solution exists at t = 1 when in fact no solution exists. General theory (Picard’s theorem) tells us that a unique solution exists on some interval containing 0, but it does not tell us how far that interval extends. With a little work we can show that a solution exists for t at least as large as π/4: for 0 ≤ t ≤ 1 we have y' ≤ 1 + y^2, so y is bounded above by the solution of z' = 1 + z^2, z(0) = 1, namely z = tan(t + π/4), which is finite until t reaches π/4. However, the solution becomes unbounded somewhere between π/4 and 1.

When I first saw this example, my conclusion was that it showed how important theory is. If you just go about numerically computing solutions without knowing that a solution exists, you can think you have succeeded when you’re actually computing something that doesn’t exist. Prove existence and uniqueness before computing. Theory comes first.

Now I think the example shows the importance of the interplay between theory and numerical computation. It would be nice to know how big the solution interval is before computing anything, but that’s not always possible. Also, it’s not obvious from looking at the equation that there should be a problem at t = 1. The difficulties we had with numerical computation suggested there might be a theoretical problem.

I first saw this problem in an earlier edition of Boyce and DiPrima. The book goes on to approximate the interval over which the solution does exist using a combination of analytical and numerical methods. It looks like the solution becomes unbounded somewhere near t = 0.97.

I wouldn’t say that theory or computation necessarily comes first. I’d say you iterate between them, starting with the approach that is more tractable. Theoretical results are more satisfying when they’re available, but theory often doesn’t tell us as much as we’d like to know. Also, people make mistakes in theoretical computation just as they do in numerical computation. It’s best when theory and numerical work validate each other.

The problem does show the importance of being concerned with existence and uniqueness, but theoretical methods are not the only methods for exploring existence. Good numerical practice, e.g. trying more than one step size or more than one numerical method, is also valuable. In any case, the problem shows that without some diligence — either theoretical or numerical — you could naively compute an approximate “solution” where no solution exists.

7 thoughts on “Approximating a solution that doesn’t exist”

  1. I thought the article below had an interesting comment about theory vs. practice from an engineering standpoint.

    http://www.americanheritage.com/articles/magazine/it/1997/3/1997_3_20.shtml

    Q: You were developing transonic theory after the sound barrier had already been broken. Hasn’t much of your historical study also involved engineering problems that were “solved” in a practical sense before they were understood theoretically?

    A: Yes, and I think that’s a typical situation in technology. You have to look hard to find cases in which the theory is well worked out before the practice. Look at the steam engine and thermodynamics; that whole vast science got started because people were trying to explain and calculate the performance of the reciprocating steam engines that had been built.

  2. While the term thermodynamics came about after the first engines, we already had a lot of theory worked out before. Pressure was a well known concept. So, I wouldn’t go so far as to say that “practical application” predates theory.

    I suspect it is more of an entanglement… theory without practice is barren… practice without theory is crude…

  3. Very nice illustration! Just to finish off the scholarship, the Boyce and DiPrima (first) edition I have (INTRODUCTION TO DIFFERENTIAL EQUATIONS, 1970, SBN 471-09338-6) puts this problem in section 7.7, at pages 277-278. It is interesting that it is dumped in a section introducing numerical methods rather than placed up front, in the Introduction. That Introduction uses Newton’s Second Law as a start, and follows with the general form of ODEs. I think this shows, in part, how heavily computation has influenced our collective view of this material.

    To balance, however, I’d say there are some algorithms and processes which give answers for phenomena and data that theory is quite hopeless at dicing. (Think Navier-Stokes.) Sure, they have and need parts that are robustly built, and existence results are critical. And, sure, people — including me — could do with more careful attention to theoretical underpinnings in nearly everything I do. I try. We try. But, for example, there’s a lot of good engineering that can be done with Laplace transforms that doesn’t really need an understanding of the proof of Lerch’s Theorem. I’d say that’s good!

  4. Note that the location of the singularity is given by t(1), where t(w) is the solution of the initial value problem
    t'(w) = 1/((1-w)^2*t(w)^2+1), t(0) = 0.
    The value of t(1) = 0.96981072 can easily be found numerically.

    The solution to the original problem can be written using Bessel functions.
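
    A sketch of this calculation in Python: the initial value problem above comes from substituting w = 1 - 1/y and regarding t as a function of w, so w = 1 corresponds to y = ∞. The right-hand side is smooth on all of [0, 1], so a fixed-step method has no trouble.

        def g(w, t):
            # t'(w) for the transformed problem
            return 1.0 / ((1.0 - w)**2 * t**2 + 1.0)

        def blowup_time(n):
            # classical RK4 from w = 0 to w = 1 in n steps
            h = 1.0 / n
            w, t = 0.0, 0.0
            for _ in range(n):
                k1 = g(w, t)
                k2 = g(w + h/2, t + h*k1/2)
                k3 = g(w + h/2, t + h*k2/2)
                k4 = g(w + h, t + h*k3)
                t += h * (k1 + 2*k2 + 2*k3 + k4) / 6
                w += h
            return t

        print(blowup_time(1000))   # approaches 0.96981... as n grows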

  5. There is no problem at t=1. You can continue the exact solution involving Bessel functions (easily obtained using, say, Mathematica), past the (first) singularity (which is at 0.930564508526…), and find that y(1)=-14.379106277…
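
    One way to check this continuation numerically without invoking Bessel functions explicitly (a sketch, not the commenter’s method): the closed form comes from the standard Riccati linearization y = -w'/w, which turns y' = t^2 + y^2 into the linear equation w'' + t^2 w = 0. Solutions w are entire, and poles of y are zeros of w, so integrating w straight through the singularity is harmless.

        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        # v = [w, w']; y(0) = -w'(0)/w(0) = 1 with w(0) = 1, w'(0) = -1
        rhs = lambda t, v: [v[1], -t**2 * v[0]]
        sol = solve_ivp(rhs, (0.0, 1.0), [1.0, -1.0],
                        rtol=1e-12, atol=1e-12, dense_output=True)

        w1, wp1 = sol.y[:, -1]
        print("continued y(1) =", -wp1 / w1)
        print("first pole at t =", brentq(lambda t: sol.sol(t)[0], 0.5, 1.0))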

  6. There are some ways you could figure out something funky was going on with numerical methods, though.

    For instance, if you used an adaptive step-size scheme, you would find that the step size necessary to maintain the desired precision would become exceedingly small, and once it fell below machine precision your program would have to throw up some sort of error to that effect. At that point you could see that the solution starts to blow up, and furthermore you would at least have a nice lower bound for the domain of existence of the solution.
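
    As a sketch of this with SciPy’s adaptive solve_ivp (any error-controlled solver would behave similarly):

        from scipy.integrate import solve_ivp

        sol = solve_ivp(lambda t, y: t**2 + y**2, (0.0, 1.0), [1.0],
                        rtol=1e-10, atol=1e-10)

        print(sol.status)    # -1: the required step size became too small
        print(sol.message)
        print(sol.t[-1])     # how far it got: a lower bound on the interval of existence
        print(sol.y[0, -1])  # the solution is already enormous there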

  7. It’s easy to bypass the singularity near t = 0.97 using numerical methods. One can e.g. consider u(t) = 1/y(t). This satisfies the differential equation:
    u' = -y'/y^2 = -(t^2 + y^2)/y^2 = -(1 + t^2/y^2) = -(1 + t^2 u^2)
    Using 4th order Runge-Kutta with step sizes of 0.05 and 0.025 and Richardson extrapolation, one finds the result:

    y(1) = 1/u(1) = -33.114355617

    The location of the singularity is the location of the zero of u(t). We can find this by considering the inverse function of u(t): we have

    dt/du = 1/u’ = -1/(1+t^2 u^2)

    We can then integrate this numerically using 4th order Runge-Kutta with the initial condition t = 0 at u = 1, with step sizes of -0.05 and -0.025, and extrapolate to find the result:

    t = 0.969810653927
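
    A Python sketch of both calculations (a reconstruction, with step counts chosen to match the step sizes above):

        def rk4(f, x, y, x_end, n):
            # classical RK4 with n fixed steps from x to x_end (h may be negative)
            h = (x_end - x) / n
            for _ in range(n):
                k1 = f(x, y)
                k2 = f(x + h/2, y + h*k1/2)
                k3 = f(x + h/2, y + h*k2/2)
                k4 = f(x + h, y + h*k3)
                y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
                x += h
            return y

        def richardson(coarse, fine, order=4):
            # one level of Richardson extrapolation for a method of the given order
            return fine + (fine - coarse) / (2**order - 1)

        # u' = -(1 + t^2 u^2) with u(0) = 1/y(0) = 1; u passes smoothly through 0
        du = lambda t, u: -(1.0 + t*t*u*u)
        u1 = richardson(rk4(du, 0.0, 1.0, 1.0, 20), rk4(du, 0.0, 1.0, 1.0, 40))
        print("y(1) =", 1.0 / u1)

        # dt/du = -1/(1 + t^2 u^2) with t = 0 at u = 1; integrate u from 1 down to 0
        dt = lambda u, t: -1.0 / (1.0 + t*t*u*u)
        t0 = richardson(rk4(dt, 1.0, 0.0, 0.0, 20), rk4(dt, 1.0, 0.0, 0.0, 40))
        print("singularity near t =", t0)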
