Screwtape on music and silence

From Screwtape, the senior demon of The Screwtape Letters:

Music and silence — how I detest them both! … no square inch of infernal space and no moment of infernal time has been surrendered to either of those abominable forces, but all has been occupied by Noise — Noise, the grand dynamism, the audible expression of all that is exultant, ruthless, and virile … We will make the whole universe a noise in the end. We have already made great strides in that direction as regards the Earth. The melodies and silences of Heaven will be shouted down in the end. But I admit we are not yet loud enough, or anything like it. Research is in progress.

Related posts

Decentralized knowledge, centralized power

Arnold Kling argues in his interview on EconTalk that knowledge is becoming more decentralized while power is becoming more centralized. Therefore more decisions will be made by people who don’t know what they’re doing.

His strongest point is that knowledge is being decentralized: jobs have become more specialized, academic disciplines have become narrower, people have become more interdependent, and so on. It’s harder to defend a blanket statement that power is becoming more centralized. Kling gives important examples of power consolidation, but one could also give examples of the opposite trend. It would be easier to argue that power is becoming more centralized in at least some contexts.

If in some contexts power is becoming centralized while knowledge is being decentralized, it is inevitable that more decisions in those contexts will be made without adequate knowledge. This sounds like a breeding ground for a sort of antibiotic-resistant strain of the Peter Principle.

Related posts

Why drugs often list headache as a side-effect

In an interview on Biotech Nation, Gary Cupit made an offhand remark about why so many drugs list headache as a possible side-effect: clinical trial participants are often asked to abstain from coffee during the trial. That explains why even those who receive a placebo often complain of headaches.

Cupit’s company Somnus Therapeutics makes a sleep medication for people who have no trouble going to sleep but do have trouble staying asleep. The medication is timed-release so that it is active only in the middle of the night, when needed. One of the criteria by which the drug is evaluated is whether there is a lingering effect the next morning. Obviously researchers would like to eliminate coffee consumption as a confounding variable. But this contributes to the litany of side-effects that announcers must mumble in television commercials.

Related: Adaptive clinical trial design

 

Two useful asymptotic series

This post will present a couple asymptotic series, explain how to use them, and point to some applications.

The most common special function in applications, at least in my experience, is the gamma function Γ(z). It’s often easier to work with the logarithm of the gamma function than to work with the gamma function directly, so one of the asymptotic series we’ll look at is the series for log Γ(z).

Another very common special function in applications is the error function erf(z). The error function and its relationship to the normal probability distribution are explained here. Even though the gamma function is more common, we’ll start with the asymptotic series for the error function because it is a little simpler.

Error function

We actually present the series for the complementary error function erfc(z) = 1 – erf(z). (Why have a separate name for erfc when it’s so closely related to erf? Sometimes erfc is easier to work with mathematically. Also, sometimes numerical difficulties require separate software for evaluating erf and erfc as explained here.)
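The numerical difficulty mentioned above is easy to demonstrate with Python’s standard math module (a minimal sketch, not tied to any particular erfc implementation): for large arguments erf(x) rounds to exactly 1 in double precision, so computing erfc as 1 − erf loses every significant digit.

```python
from math import erf, erfc

x = 10.0
# erf(10) rounds to exactly 1.0 in double precision, so the
# subtraction cancels away all the information.
print(1 - erf(x))  # prints 0.0
# A dedicated erfc implementation keeps full accuracy.
print(erfc(x))     # prints roughly 2.09e-45
```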

\sqrt{\pi} z e^{z^2} \mbox{erfc}(z) \sim 1 + \sum_{m=1}^\infty (-1)^m \frac{(2m-1)!!}{(2z^2)^{m}}

If you’re unfamiliar with the n!! notation, see this explanation of double factorial.

Note that the series has a squiggle ~ instead of an equal sign. That is because the partial sums of the right side do not converge to the left side. In fact, the partial sums diverge for any z. Instead, if you take any fixed partial sum you obtain an approximation of the left side that improves as z increases.

The series above is valid for any complex value of z as long as |arg(z)| < 3π/4. However, the error term is easier to describe if z is real. In that case, when you truncate the infinite sum at some point, the error is less than the first term left out, and it has the same sign as that term. So, for example, if you drop the sum entirely and keep only the “1” on the right side, the error is negative and its absolute value is less than 1/(2z²).
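Here is a sketch of how the truncated series might be used numerically (the function name and term count are my own). Each term of the sum is obtained from the previous one by multiplying by −(2m − 1)/(2z²), so no factorials need to be computed explicitly.

```python
from math import erfc, exp, pi, sqrt

def erfc_asymptotic(z, n_terms):
    """Approximate erfc(z) for real z > 0 by truncating the
    asymptotic series after n_terms terms of the sum."""
    total = 1.0
    term = 1.0
    for m in range(1, n_terms + 1):
        # term becomes (-1)^m (2m-1)!! / (2 z^2)^m
        term *= -(2*m - 1) / (2 * z * z)
        total += term
    # Divide out the sqrt(pi) z e^{z^2} factor on the left side.
    return total * exp(-z*z) / (z * sqrt(pi))

# Even at the modest value z = 3, a few terms give a few correct digits.
print(erfc_asymptotic(3.0, 3), erfc(3.0))
```

Because the series diverges, adding more terms eventually makes the approximation worse for fixed z; in practice one stops while the terms are still shrinking.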

One way this series is used in practice is to bound the tails of the normal distribution function. A slightly more involved application can be found here.

Log gamma

The next series is the asymptotic series for log Γ(z).

\log \Gamma(z) \sim (z - \frac{1}{2}) \log z - z + \frac{1}{2} \log(2\pi) + \frac{1}{12z} - \frac{1}{360z^3} + \cdots

If you throw out all the terms involving powers of 1/z you get Stirling’s approximation.

As before, the partial sums on the right side diverge for any z, but if you truncate the series on the right, you get an approximation for the left side that improves as z increases. And as before, the series is valid for complex z but the error is simpler when z is real. In this case, complex z must satisfy |arg(z)| < π. If z is real and positive, the approximation error is bounded by the first term left out and has the same sign as the first term left out.

The coefficients 1/12, 1/360, etc. require some explanation. The general series is

\log \Gamma(z) \sim (z - \frac{1}{2}) \log z - z + \frac{1}{2}\log 2\pi + \sum_{m=1}^\infty \frac{B_{2m}}{2m(2m-1) z^{2m-1}}

where the numbers B_{2m} are Bernoulli numbers.

This post showed how to use this asymptotic series to compute log n!.
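As a rough check of the series, here is a sketch (function names are mine) that truncates after the 1/(360z³) term and compares against the standard library’s lgamma. Since log n! = log Γ(n + 1), the same function gives log factorials.

```python
from math import lgamma, log, pi

def log_gamma_asymptotic(z):
    """Approximate log Gamma(z) for large real z > 0 using the
    asymptotic series truncated after the 1/(360 z^3) term."""
    return ((z - 0.5) * log(z) - z + 0.5 * log(2 * pi)
            + 1 / (12 * z) - 1 / (360 * z**3))

def log_factorial(n):
    # log n! = log Gamma(n + 1)
    return log_gamma_asymptotic(n + 1)

print(log_gamma_asymptotic(10.0), lgamma(10.0))
print(log_factorial(100), lgamma(101.0))
```

As with the erfc series, the error of the truncated sum is bounded by the first omitted term, here 1/(1260z⁵), so the approximation is already very good for moderate z.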

References

The asymptotic series for the error function is equation 7.1.23 in Abramowitz and Stegun. The bounds on the remainder term are described in section 7.1.24. The series for log gamma is equation 6.1.41 and the error term is described in 6.1.42.

Subtle variation on gaining weight to become taller

Back in March I wrote a blog post asking whether gaining weight makes you taller. Weight and height are clearly associated, and from that data alone one might speculate that gaining weight could make you taller. Of course causation is in the other direction: becoming taller generally makes you gain weight.

In the 1980s, cardiologists discovered that patients with irregular heartbeats in the first 12 days following a heart attack were much more likely to die. Antiarrhythmic drugs became standard therapy. But in the next decade cardiologists discovered this was a bad idea. According to Philip Devereaux, “The trial didn’t just show that the drugs weren’t saving lives, it showed they were actually killing people.”

David Freedman relates the story above in his book Wrong. Freedman says

In fact, notes Devereaux, the drugs killed more Americans than the Vietnam War did—roughly an average of forty thousand a year died from the drugs in the United States alone.

Cardiologists had good reason to suspect that antiarrhythmic drugs would save lives. In retrospect, it may be that heart-attack patients with a poor prognosis have arrhythmia, rather than arrhythmia causing the poor prognosis. Or the association may be more complicated than either explanation.

Related: Adaptive clinical trial design

 

Bug in SciPy’s erf function

Last night I produced the plot below and was very surprised at the jagged spike. I knew the curve should be smooth and strictly increasing.

My first thought was that there must be a numerical accuracy problem in my code, but it turns out there’s a bug in SciPy version 0.8.0b1. I started to report it, but I saw there were similar bug reports and one such report was marked as closed, so presumably the fix will appear in the next release.

The problem is that SciPy’s erf function is inaccurate for arguments with imaginary part near 5.8. For example, Mathematica computes erf(1.0 + 5.7i) as -4.5717×10^12 + 1.04767×10^12 i. SciPy computes the same value as -4.4370×10^12 + 1.3652×10^12 i. The imaginary component is off by about 30%.

Here is the code that produced the plot.

from scipy.special import erf
from numpy import linspace, exp, sqrt
import matplotlib.pyplot as plt

def g(y):
    z = (1 + 1j*y) / sqrt(2)
    temp = exp(z*z)*(1 - erf(z))
    u, v = temp.real, temp.imag
    return -v / u

x = linspace(0, 10, 101)
plt.plot(x, g(x))
plt.show()

Two kinds of multitasking

People don’t task switch like computers do.

The earliest versions of Windows and Mac OS used cooperative multitasking. A Windows program would do some small unit of work in response to a message and then relinquish the CPU to the operating system until the program got another message. That worked well, as long as all programs were written with consideration for other programs and had no bugs. An inconsiderate (or inexperienced) programmer might do too much work in a message handling routine and monopolize the CPU. A bug resulting in an infinite loop would keep the program from ever letting other programs run.

Now desktop operating systems use preemptive multitasking. Unix used this form of multitasking from the beginning. Windows started using preemptive multitasking with Windows NT and Windows 95, and the Macintosh gained it with OS X. The operating system preempts programs to tell them it’s time to give another program a turn with the CPU. Programmers don’t have to think about handing over control of the CPU, so programs are easier to write. And if a program runs into an infinite loop, it only hurts itself.

Computers work better with preemptive multitasking, but people work better with cooperative multitasking.

If you want to micro-manage people, if you don’t trust them and want to protect yourself against their errors, treat them like machines. Interrupt them whenever you want. Preemptive task switching works great for machines.

But people take more than a millisecond to regain context. (See Mary Czerwinski’s comments on context re-acquisition.) People do much better if they have some control over when they stop one thing and start another.

Related post: Inside the multitasking and marijuana study