Technological boundary layer

The top sides of your ceiling fan blades are dusty because of a boundary layer effect. When the blades spin, a thin layer of air above the blades moves with the blades. That’s why the fan doesn’t throw off the dust.

A mathematical model may have very different behavior in two large regions, with a thin region of rapid transition between the two, such as the transition between the ceiling fan blade and the circulating air in the room. That transition region is called a boundary layer. In this post I want to use the term boundary layer metaphorically.

Some information you use so frequently that you memorize it without trying. And some information you use so infrequently that it’s not worth memorizing.

There’s not much need to deliberately memorize how software tools work because that information mostly falls into one of the two categories above. Either you use the tool so often that you remember how to use it, or you use the tool so rarely that it’s best to look up how to use it just-in-time.

But there’s a thin region between these two categories: things you use often enough that it’s annoying to keep looking up how to use them, but not so often that you remember the details. In this technological boundary layer, it might be worthwhile to deliberately memorize and periodically review how things work. Maybe this takes the form of flashcards [1] or exercises.

This boundary layer must be kept very small. If you spend more than a few minutes a day on it, you’re probably taking on too much. YMMV.

Things may move in and out of your technological boundary layer over time. Maybe you use or review something so much that it moves into long-term memory. Or maybe you use it less often and decide to let it slip into things you look up as needed.


[1] Paper or electronic? In my experience, making paper flashcards helps me memorize things, even if I don’t review them. But I’ve also used the Anki software before and found it helpful.

Mentally compute cosine

In his book Dead Reckoning, Ronald Doerfler presents a simple approximation for the cosine of an angle in degrees.

cos d° ≈ 1 − d(d + 1)/7000

The approximation is good to about three digits for angles between 0° and 40°.

Although cosine is an even function, the approximation above is not, and the error is larger for negative angles. But there’s no need to use negative values since cos(−d) = cos(d). Just compute the cosine of the absolute value of the angle.
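Here’s a quick sketch checking both claims above. The function name `approx_cos` is mine, not Doerfler’s.

```python
import math

def approx_cos(d):
    """Doerfler's mental approximation to the cosine of d degrees."""
    d = abs(d)  # cosine is even, so work with |d|
    return 1 - d * (d + 1) / 7000

# Worst error over 0..40 degrees stays around the third decimal place
max_err = max(abs(approx_cos(d) - math.cos(math.radians(d)))
              for d in range(41))

# Forgetting the absolute value at -10 degrees gives a noticeably
# larger error, since d*(d + 1) is not an even function of d
naive = 1 - (-10) * (-10 + 1) / 7000
```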

The simplest thing to do would have been to use a Taylor approximation, but Taylor approximations don’t work well over a large interval; they’re excessively accurate near the middle and not accurate enough at the ends. The error at 40° would have been roughly 30x larger. Also, the Taylor approach would have required multiplying or dividing by a more mentally demanding number than 7000.
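The comparison at 40° is easy to verify numerically, using the two-term Taylor series 1 − x²/2 with the angle converted to radians:

```python
import math

d = 40
exact = math.cos(math.radians(d))
doerfler = 1 - d * (d + 1) / 7000            # mental approximation
taylor = 1 - math.radians(d) ** 2 / 2        # two-term Taylor series

# The Taylor error at 40 degrees is roughly 30 times larger
ratio = abs(taylor - exact) / abs(doerfler - exact)
```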

If we wanted to come up with a quadratic approximation for a computer rather than for a human, we could squeeze out more accuracy. The error curve over the interval [0, 40] dips down significantly more than it goes up. An optimal approximation would go up and down about the same amount.


Oscillatory differential equations

The solution to a differential equation is called oscillatory if its set of zeros is unbounded. This does not necessarily mean that the solution is periodic, but that it crosses the horizontal axis infinitely often.

Fowler [1] studied the following differential equation, which demonstrates both oscillatory and nonoscillatory behavior.

x″ + t^σ |x|^γ sgn x = 0

He proved that

  1. If σ + 2 ≥ 0 then all solutions oscillate.
  2. If σ + (γ + 3)/2 < 0 then no solutions oscillate.
  3. If neither (1) nor (2) holds then some solutions oscillate and some do not.
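The trichotomy is simple enough to encode directly. A small sketch (the function name is mine):

```python
def fowler_class(sigma, gamma):
    """Classify x'' + t^sigma |x|^gamma sgn x = 0 by Fowler's theorem."""
    if sigma + 2 >= 0:
        return "all solutions oscillate"
    if sigma + (gamma + 3) / 2 < 0:
        return "no solutions oscillate"
    return "some solutions oscillate and some do not"
```

For example, `fowler_class(-2, 1)` lands in the first case, `fowler_class(-2.1, 1)` in the second, and `fowler_class(-2.1, 1.5)` in the third.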

The edge case is σ = −2 and γ = 1. This satisfies condition (1) above. A slight decrease in σ will push the equation into condition (2). And a simultaneous decrease in σ and increase in γ can put it in condition (3).

It turns out that the edge case can be solved in closed form. The equation is linear because γ = 1 and solutions are given by

c √t sin(φ + √3 log(t) / 2).

The solutions oscillate because the argument to sine is an unbounded function of t, and so it runs across multiples of π infinitely often. The distance between zero crossings increases exponentially with time because the logarithms of the crossings are evenly spaced.
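The closed-form solution can be checked numerically. Here’s a small sketch, taking c = 1 and φ = 0 and verifying by central differences that the solution satisfies x″ + x/t² = 0:

```python
import math

def x(t):
    # c sqrt(t) sin(phi + sqrt(3) log(t) / 2) with c = 1, phi = 0
    return math.sqrt(t) * math.sin(math.sqrt(3) * math.log(t) / 2)

def residual(t, h=1e-4):
    # Central-difference approximation to x'' + x / t^2,
    # which should be near zero if x solves the equation
    xpp = (x(t - h) - 2 * x(t) + x(t + h)) / h ** 2
    return xpp + x(t) / t ** 2
```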

This post took a different turn after I started writing it. My original intention was to solve the differential equation numerically and say “See: when you change the parameters slightly you get what the theorem says.” But that didn’t work out.

First I tried numerically computing the solution above with σ = −2 and γ = 1, but I didn’t see oscillations. I tried making σ a little larger, but still didn’t see oscillations. When I increased σ more I could see the oscillations.

I was able to go back and see the oscillations with σ = −2 and γ = 1 by tweaking the parameters of my ODE solver. It’s best practice to tweak solver parameters even when everything looks right, just to make sure your solution is robust to algorithm changes. Maybe I would have done that anyway, but since I already knew the analytic solution, I knew to keep tweaking until I saw the right behavior.

Without some theory as a guide, it would be difficult to determine from numerical solutions alone whether a solution oscillates. If you see oscillations, then they’re probably real (unless your numerical method is unstable), but if you don’t see oscillations, how do you know you won’t see them if you look further out? How much further out should you look? In a practical application, the context of your problem will tell you how far out to look.

My intention was to recommend the equation above as an illustration for an introductory DE class because it demonstrates the important fact that small changes to an equation can qualitatively change the behavior of the solutions. This is especially true for nonlinear DEs, which the above equation is when γ ≠ 1. It could still be used for that purpose, but it would make a better demonstration than a homework exercise. The instructor could tune the DE solver to make sure the solutions are qualitatively correct.

I recommend the equation as an illustration, but for a course covering numerical solutions to DEs. It illustrates a different point than I first intended, namely the potentially finicky behavior of software for solving differential equations.

There are a couple morals to this story. One is that numerical methods have not eliminated the need for theory. The ideal is to combine analytic and numerical methods: analytic methods point out qualitative behavior that might be hard to discover numerically, while numerical methods fill in the quantitative details, since analytic methods are not likely to produce closed-form solutions for nonlinear DEs.

Another moral is that it’s best to twiddle the parameters of your numerical method to make sure the solution doesn’t qualitatively change. If you’ve adequately computed a solution, computing it again with smaller integration steps shouldn’t make much difference. But if you do see a substantial difference, your first solution probably wasn’t as good as you thought.
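That check is easy to script. The sketch below uses a homemade fixed-step RK4 (a real application would use an adaptive library solver) to solve the edge-case equation on [1, 10⁴] and counts sign changes of the solution at two step sizes. If the counts disagreed, the coarser run would be suspect. The analytic solution has zeros near t ≈ 38 and t ≈ 1414 on this interval.

```python
def rk4_solve(f, t0, t1, y0, n):
    """Classical fixed-step RK4 for y' = f(t, y), y a list of components."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    path = [(t, y[0])]
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
        path.append((t, y[0]))
    return path

def rhs(t, y):
    # Edge case sigma = -2, gamma = 1: x'' + x / t^2 = 0
    return [y[1], -y[0] / t ** 2]

def crossings(path):
    xs = [x for _, x in path]
    return sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)

# Same problem, two step sizes; agreement suggests the count is trustworthy
n1 = crossings(rk4_solve(rhs, 1.0, 1.0e4, [0.0, 1.0], 20_000))
n2 = crossings(rk4_solve(rhs, 1.0, 1.0e4, [0.0, 1.0], 40_000))
```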


[1] R. H. Fowler. Further studies of Emden’s and similar differential equations. Quarterly Journal of Mathematics. 2 (1931), pp. 259–288.