Just-in-case versus just-in-time

What do you learn just in case you’ll need it in the future, and what do you learn just in time when you do need it?

In general, you learn things in school just in case you’ll need them later. Then once you get a job, you learn more things just as you need them.

When you learn just in time, you’re highly motivated. There’s no need to wonder whether you’ll ever apply what you’re learning, since the application came first. But you can’t learn everything just in time. You have to learn some things before you can imagine using them. You need to have certain patterns in your head before you can recognize them in the wild.

Years ago someone told me that he had never learned algebra and had never needed it. But I learned algebra and I use it constantly. It’s lucky that I was the one who learned algebra, since I ended up needing it! But of course it’s not luck. I wouldn’t have had any use for it either if I hadn’t learned it.

The difference between just-in-case and just-in-time is like the difference between training and trying. You can’t run a marathon by trying hard. The first person who tried that died. You have to train for it. You can’t just say that you’ll run 26 miles when you need to and do nothing until then.

Software developers prefer just-in-time learning. There’s so much out there that you’ll never need. You can’t learn every detail of every operating system, every programming language, and every library before you do any real work. You can only remember so much arbitrary information without a specific need for it. Even if you could learn it all in the abstract, you’d be decades into your career before producing anything. On top of that, technological information has a short shelf life, so it isn’t worthwhile to learn much that you’re not sure you’ll need.

On the other hand, you need to know what’s available, even if you’re only going to learn the details just in time. You can’t say “I need to learn about version control systems now” if you don’t even know what version control is. You need a survey knowledge of technology just in case. You can learn APIs just in time. But there’s a big gray area in between, where it’s hard to know what is worth learning and when.

p-values are inconsistent

If there’s evidence that an animal is a bear, you’d think there’s even more evidence that it’s a mammal. It turns out that p-values fail this common-sense criterion as a measure of evidence.

I just ran across a paper by Mark Schervish1 that contains a criticism of p-values I had not seen before. Despite the protests of many statisticians, p-values are commonly used as measures of evidence. It seems reasonable that a measure of evidence should have the following property: if a hypothesis H implies another hypothesis H′, then the evidence in favor of H′ should be at least as great as the evidence in favor of H.

Here’s one of the examples from Schervish’s paper. Suppose data come from a normal distribution with variance 1 and unknown mean μ. Let H be the hypothesis that μ is contained in the interval (-0.5, 0.5). Let H′ be the hypothesis that μ is contained in the interval (-0.82, 0.52). Then suppose you observe x = 2.18. The p-value for H is 0.0502 and the p-value for H′ is 0.0498. This says there is more evidence to support the hypothesis H that μ is in the smaller interval than there is to support the hypothesis H′ that μ is in the larger interval. If we adopt α = 0.05 as the cutoff for significance, we would reject the hypothesis that -0.82 < μ < 0.52 but accept the hypothesis that -0.5 < μ < 0.5. We’re willing to accept that we’ve found a bear, but doubtful that we’ve found a mammal.
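If you want to check the arithmetic, here is a minimal sketch in Python. It is not taken from the post or the paper; it assumes the p-value for the interval null a < μ < b, given a single observation x > b from a normal distribution with variance 1, is the sum of the two one-sided tail probabilities P(X ≥ x | μ = b) + P(X ≥ x | μ = a). That convention reproduces the 0.0502 and 0.0498 figures above.

    from scipy.stats import norm

    def interval_p_value(x, a, b):
        # Assumed p-value for the interval null hypothesis a < mu < b, given one
        # observation x (with x > b) from a normal distribution with variance 1:
        # the sum of the upper-tail probabilities of x at the two endpoints.
        return norm.sf(x - b) + norm.sf(x - a)

    x = 2.18
    print(round(interval_p_value(x, -0.5, 0.5), 4))    # 0.0502, the p-value for H
    print(round(interval_p_value(x, -0.82, 0.52), 4))  # 0.0498, the p-value for H'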

1 Mark J. Schervish. “P values: What They Are and What They Are Not.” The American Statistician, August 1996, Vol. 50, No. 3.

Update: I added the details of the p-value calculation here.
