I had a discussion recently about whether things are really continuous in the real world. Strictly speaking, maybe not, but practically yes. The same is true of all mathematical properties. There are no circles in the real world, not in the Platonic sense of a mathematical circle. But a circle is a very useful abstraction, and plenty of things are circles for practical purposes. In this post I’ll explain the typical definition of continuity and a couple of modifications for application.
A function f is continuous if nearby points go to nearby points. A discontinuity occurs when some points are close together but their images are far apart, such as when you have a threshold effect. For example, suppose you pay $3 to ship packages that weigh less than a pound and $5 to ship packages that weigh a pound or more. Two packages can weigh almost the same, one a little less than a pound and one a little more, but not cost almost the same to ship. The difference in their shipping cost is $2, no matter how close together they are, as long as they’re on opposite sides of the one-pound threshold.
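To make the jump concrete, here’s a minimal Python sketch of the rate schedule. The function name and the exact rates are just the numbers from the example, not anyone’s real shipping table:

```python
def shipping_cost(pounds: float) -> float:
    """Step function: $3 below one pound, $5 at one pound or more."""
    return 3.00 if pounds < 1.0 else 5.00

# Two weights that are nearly equal but straddle the threshold
print(shipping_cost(0.999))  # 3.0
print(shipping_cost(1.001))  # 5.0 -- a $2 jump, however close the weights are
```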
A practical notion of continuity has some idea of resolution. Suppose in our example that packages below one pound ship for $3.00 and packages that weigh a pound or more ship for $3.05. You might say “I don’t care about differences of a nickel.” And so at that resolution, the shipping costs are continuous.
The key to understanding continuity is being precise about the two uses of “nearby” in the statement that a continuous function sends nearby points to nearby points. What do you mean by nearby? How close is close enough? In the pure mathematical definition of continuity, the answer is “as close as you want.” You specify any tolerance you like, no matter how small, and call it ε. If, no matter which ε someone picks, it’s always possible to find a neighborhood around x small enough that all its points are mapped within ε of f(x), then f is continuous at x.
For applications, we modify this definition by putting a lower bound on ε. A function is continuous at x, for the purposes of a particular application, if for every ε larger than the resolution of the problem, you can find a neighborhood of x so small that all the points in that neighborhood are mapped within ε of f(x). In our shipping example, if you only care about differences in rates larger than $1, then if the rates change by $0.05 at the one-pound threshold, the rates are continuous as far as you’re concerned. But if the rates jump by $2 at one pound, then the rates are discontinuous for your purposes.
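Here’s a rough numerical sketch of that modified definition. The function `practically_continuous_at`, its crude four-point sampling scheme, and the two rate functions are my own illustration, not standard library code:

```python
def practically_continuous_at(f, x, resolution, deltas=(1e-1, 1e-2, 1e-3, 1e-4)):
    """Crude numerical test of the modified definition: f is continuous at x,
    for this application, if some small enough neighborhood of x maps entirely
    within the resolution of f(x)."""
    for delta in deltas:
        ys = [x - delta, x - delta / 2, x + delta / 2, x + delta]
        if all(abs(f(y) - f(x)) < resolution for y in ys):
            return True   # found a neighborhood whose jump is below the resolution
    return False          # every neighborhood we tried jumps by more than that

nickel_rates = lambda w: 3.00 if w < 1.0 else 3.05       # $0.05 jump at one pound
two_dollar_rates = lambda w: 3.00 if w < 1.0 else 5.00   # $2 jump at one pound

print(practically_continuous_at(nickel_rates, 1.0, resolution=1.00))      # True
print(practically_continuous_at(two_dollar_rates, 1.0, resolution=1.00))  # False
```

A serious test would sample more carefully, but the point is only that the requirement ε > 0 has been replaced by ε > resolution.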
When you see a smooth curve on a high-resolution screen, it’s continuous as far as your vision is concerned. Nearby points on the curve go to points that are nearby as far as the resolution of your vision is concerned, even though strictly speaking the curve could have jump discontinuities at every pixel.
If you take a function that is continuous in the pure mathematical sense, then any multiple of that function is also continuous. If you make the function 10 times larger, you just need a smaller neighborhood around each point x to keep all of its points mapped within ε of f(x). But in a practical application, a multiple of a continuous function might not be continuous. If your resolution on shipping rates is $1, and the difference in cost between shipping a 15 ounce package and a 17 ounce package is $0.05, then the rates are continuous for you. But if the rates were suddenly 100 times greater, now you care, because now the difference in cost is $5.
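The same effect in a quick sketch, again with made-up rates:

```python
nickel_rates = lambda w: 3.00 if w < 1.0 else 3.05
scaled_rates = lambda w: 100 * nickel_rates(w)   # same shape, 100x the prices

resolution = 1.00  # you only care about differences bigger than a dollar
print(abs(nickel_rates(1.001) - nickel_rates(0.999)) > resolution)  # False: looks continuous to you
print(abs(scaled_rates(1.001) - scaled_rates(0.999)) > resolution)  # True: now you notice the jump
```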
With the example of the curve on a monitor, if you were to zoom in on the image, at some point you’d see the individual pixels and so the image would no longer be continuous as far as your vision is concerned.
We’ve been focusing on what nearness means for the output, ε. Now let’s focus on nearness for the input. Introducing a restriction on ε lets us say that some functions are continuous for a particular application even though they are not continuous in the pure mathematical sense. We can also introduce a restriction on the resolution of the input, call it δ, so that the opposite is true: some functions are continuous in the pure mathematical sense but not continuous for a particular application.
The pure mathematical definition of continuity of f at x is that for every ε > 0, there exists a δ > 0 such that if |x − y| < δ, then |f(x) − f(y)| < ε. But how small does δ have to be? Maybe too small for application. Maybe points would have to be closer together than they can actually be in practice. If a function changes rapidly, but smoothly, then it’s continuous in the pure mathematical sense, but it may be discontinuous for practical purposes. The more rapidly the function changes, the smaller δ has to be for points within δ of x to end up within ε of f(x).
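For example, sin(1000x) is continuous everywhere in the pure sense, but its derivative is bounded by 1000, so δ has to be on the order of ε/1000. A quick check, where the factor of 1000 is just an arbitrary stand-in for “changes rapidly”:

```python
import math

f = lambda x: math.sin(1000 * x)   # continuous everywhere, but rapidly varying

x, eps = 0.0, 0.1
print(abs(f(x) - f(x + 0.001)))       # ~0.84: a delta of 0.001 is far too big
print(abs(f(x) - f(x + eps / 2000)))  # ~0.05: delta must be on the order of eps/1000
```

If your inputs can’t be resolved more finely than, say, 0.01, no allowable δ works, and the function is discontinuous for your purposes.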
So an applied definition of continuity would look something like this.
A function f is continuous at x, for the purposes of a particular application, if for every ε > the resolution of the problem, there exists a δ > the lower limit for that application, such that if |x − y| < δ, then |f(x) − f(y)| < ε.
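One way to turn this into a computation: since every ε above the output resolution must work, and we are free to pick any δ above the input limit, the binding case is ε near its lower bound paired with the smallest allowed neighborhood. The sketch below checks exactly that; the function name and sampling scheme are mine, not standard:

```python
import math

def applied_continuous_at(f, x, eps_min, delta_min, samples=1000):
    """Numerical sketch of the applied definition. Every eps > eps_min must
    work, and we may choose any delta > delta_min, so it suffices to check
    that every sampled point within delta_min of x maps within eps_min of f(x)."""
    ys = [x + delta_min * (2 * i / samples - 1) for i in range(samples + 1)]
    return all(abs(f(y) - f(x)) <= eps_min for y in ys)

wiggle = lambda t: math.sin(1000 * t)  # continuous in the pure sense
print(applied_continuous_at(wiggle, 0.0, eps_min=0.1, delta_min=1e-2))  # False: too wiggly at this input resolution
print(applied_continuous_at(wiggle, 0.0, eps_min=0.1, delta_min=1e-5))  # True: fine if inputs resolve to 1e-5
```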