Definition of faith

C. S. Lewis wrote that

Faith is holding on to things your reason has once accepted, in spite of your changing moods.

When someone says “I can’t believe it’s Tuesday” he really means that he does believe it’s Tuesday, but it takes effort. His emotions are telling him that it is some other day, but he chooses to accept that it is Tuesday for other reasons.

It takes faith for me to believe that men walked on the moon in 1969. I’m convinced that it happened, but it doesn’t seem true to me. It doesn’t seem plausible that 1960s technology could have accomplished this, even though I know that it did.

It takes faith for me to believe that Ernest Shackleton and his crew survived their exploration of the Antarctic. I don’t doubt the historical accounts, though they are hard to believe.

It takes faith for me to believe some mathematical theorems even though I have carefully gone through every line of their proofs. I am convinced that these theorems are true though they do not seem true. Other mathematicians have commented on the same experience. For example, Jerry Bona once joked that

The Axiom of Choice is obviously true; the Well Ordering Principle is obviously false; and who can tell about Zorn’s Lemma?

The three statements he mentions are logically equivalent, though the Axiom of Choice is the easiest to believe and the Well Ordering Principle is the hardest to believe.

It takes faith for me to believe in God. At times it doesn’t feel like God exists, though there are reasons to believe that He does. I have found these reasons convincing, and I hold on to my conclusions in spite of my changing moods.


Simple approximation to normal distribution

Here’s a simple approximation to the normal distribution I just ran across. The density function is

f(x) = (1 + cos(x)) / (2π)

over the interval (-π, π). The plot below graphs this density with a solid blue line. For comparison, the density of a normal distribution with the same variance is plotted with a dashed orange line.

[Plot of the cosine density (solid blue) and the matching normal density (dashed orange)]

The approximation is good enough to use for teaching. Students may benefit from doing an exercise twice, once with this approximation and then again with the normal distribution. Having an approximation they can integrate in closed form may help take some of the mystery out of the normal distribution.

The approximation may have practical uses. The agreement between the PDFs isn’t great. However, the agreement between the CDFs (which is more important) is surprisingly good. The maximum difference between the two CDFs is only 0.018. (The differences between the PDFs oscillate, and so their integrals, the CDFs, are closer together.)
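The agreement between the CDFs can be checked numerically. Here is a short sketch (not from the original post): the cosine density integrates to F(x) = (x + sin x)/(2π) + 1/2, and its variance works out to π²/3 − 2, which fixes the scale of the comparison normal.

```python
from math import pi, sin, sqrt, erf

def cosine_cdf(x):
    """CDF of the density f(x) = (1 + cos x)/(2*pi) on (-pi, pi)."""
    return (x + sin(x)) / (2 * pi) + 0.5

# Normal distribution with mean 0 and the same variance, pi^2/3 - 2
sigma = sqrt(pi**2 / 3 - 2)

def normal_cdf(x):
    return 0.5 * (1 + erf(x / (sigma * sqrt(2))))

# Maximum difference between the two CDFs over a fine grid on (-pi, pi)
xs = [-pi + 2 * pi * i / 10_000 for i in range(1, 10_000)]
max_diff = max(abs(cosine_cdf(x) - normal_cdf(x)) for x in xs)
print(round(max_diff, 3))  # about 0.018
```

The grid search confirms the figure quoted above: the CDFs never differ by more than about 0.018.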

I ran across this approximation here. It goes back to the 1961 paper “A cosine approximation to the normal distribution” by D. H. Raab and E. H. Green, Psychometrika, Volume 26, pages 447-450.

Update 1: See the paper referenced in the first comment. It gives a much more accurate approximation using a logistic function. The cosine approximation is a little simpler and may be better for teaching. However, the logistic approximation has infinite support. That could be an advantage since students might be distracted by the finite support of the cosine approximation.

The logistic approximation for the standard normal CDF is

F(x) = 1/(1 + exp(−0.07056 x³ − 1.5976 x))

and has a maximum error of 0.00014 at x = ± 3.16.

Update 2: How might you use this approximation the other way around, approximating a cosine by a normal density? See Always invert.


For daily posts on probability, follow @ProbFact on Twitter.


Daily tip winner and statistics

Giveaway winner

Nicholas Dunn is the winner of the Twitter daily tip giveaway. A coveted RegexTip coffee mug is on its way. Update (5 May 2010): Nick posted a photo of his mug.

[Photo of coffee mug with @RegexTip logo]

Follower statistics

I ran some statistics on the daily tip sites. The full data are available here.

There are 1406 unique followers across the seven daily tip accounts. There are 2572 followers if you count multiple follows.

Most of the patterns in the data were predictable. For example, people who follow one math account are likely to follow another math account. But those who follow SansMouse for Windows keyboard shortcuts are not so interested in the math accounts.

Here’s one surprise: 84% of TopologyFact followers also follow AlgebraFact, but only 31% of AlgebraFact followers follow TopologyFact. This may be in part because TopologyFact is newer; I suspect those who follow the newer accounts are more aware of the older accounts than vice versa.
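The asymmetry is just conditional probability computed in two directions from the same overlap. A sketch with hypothetical follower IDs chosen to reproduce the percentages above (the real data are in the linked spreadsheet):

```python
# Hypothetical follower IDs: 100 TopologyFact followers, 271 AlgebraFact
# followers, 84 people in both. These counts are made up to match the
# percentages in the post, not taken from the actual data.
topology = set(range(100))
algebra = set(range(16, 287))

both = topology & algebra  # 84 people follow both accounts

pct_topology_also_algebra = 100 * len(both) / len(topology)
pct_algebra_also_topology = 100 * len(both) / len(algebra)
print(round(pct_topology_also_algebra))  # 84
print(round(pct_algebra_also_topology))  # 31
```

The same overlap of 84 people is a large fraction of the smaller account and a small fraction of the larger one.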

Related links

Here’s the full list of my daily tip accounts.


Chauffeurs and Ferraris revisited

About a year ago I wrote a post Would you rather have a chauffeur or a Ferrari? commenting on an analogy by Dan Bricklin.

Fictional computers such as the HAL 9000 from 2001: A Space Odyssey were envisioned as chauffeurs. You tell the computer what to do and then go along passively for the ride. Bricklin says it looks like people would rather have a Ferrari than a chauffeur. We want our computers to be powerful tools, but we want to be actively involved in using them.

I’ve been thinking about how this applies to software development tools. “Wizards” that generate code are chauffeurs. So are software frameworks. You can go along passively for the ride, inserting your code where the wizard or the framework tells you to. You don’t have to know the route. You can concentrate on “higher level” goals.

It can be a relief to let someone else be responsible for the driving. But after a while you can get the feeling the chauffeur isn’t taking you where you want to go. In fact, you’re being kidnapped. That’s what happened with the HAL 9000 and that’s often what happens with wizards and frameworks.

A good software library is more like a Ferrari. It allows you to do what you want, but you remain in charge. Unfortunately, libraries have a tendency to require wizards and morph into frameworks. When the entropy of a library increases, writing a wizard is easier than improving the library. When it becomes difficult for the wizards to paper over the complexity — make sure you run this wizard before you run that wizard, etc. — the library may become a framework.

So what’s the solution? To write everything yourself from scratch? No, that’s more like walking than driving a Ferrari. Mike Taylor’s article Tangled Up in Tools gives some ideas of how to write libraries that are more like Ferraris than chauffeurs. As he says, “the problem isn’t libraries, it’s bad libraries.” He says a good library is well documented and minimizes the “radius of comprehension,” the amount of context you need to understand to use each feature. That’s a good start.


Freebies and entitlement

Take away a freebie and people will hate you.

The latest EconTalk podcast relates a story of people who harbored a grudge against the Red Cross for decades. What did the Red Cross do that was so bad? They sold doughnuts at cost.

The Red Cross had given soldiers doughnuts for a while. Then at some point they started charging a nickel. They were not making a profit, only selling the doughnuts at cost. And they only started charging because the U. S. Army asked them to. Even so, some veterans and their families remained angry about this for many years. To this day, some Red Cross workers bring free doughnuts to meetings trying to make up for hard feelings.

If you give away something but make it clear from the beginning that it’s only free temporarily — a free sample, a trial version, etc. — then you may charge money later without causing resentment. But if people ever get the idea that your product will remain free, they feel entitled to it.

If Facebook, for example, decided to charge even $1 a year for an account, they would lose millions of members. People would burn Mark Zuckerberg in effigy. Presumably they could have charged $1 a year without criticism when they started. But since the service has been free, they can never charge for it without creating enormous ill will.

Did automobile accidents decrease?

Here’s a homework problem from a class I taught:

… In past years, the average number of accidents per year was 15, and this year it was 10. Is it justified to claim that the accident rate has dropped?

The naive answer is of course the rate has dropped. Ten is less than fifteen. This reminds me of a joke attributed to Abraham Lincoln:

Q: If you call a tail a leg, how many legs does a dog have?

A: Four. Calling a tail a leg doesn’t make it one.

But the homework problem isn’t asking whether ten is less than fifteen. Part of the purpose of the exercise is to state the problem well. The real question is whether there is good evidence that the fundamental causes of automobile accidents have changed for the better or whether there is a fair chance that a random fluctuation caused the decrease. It turns out the latter is the case given the model (Poisson) that the question suggests using.
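Under the suggested Poisson model, the relevant computation is the probability of seeing 10 or fewer accidents in a year when the true rate is still 15. A minimal sketch (the textbook exercise may frame the test somewhat differently):

```python
from math import exp

# P(X <= 10) when X ~ Poisson(15): the chance of observing 10 or fewer
# accidents in a year even if the underlying accident rate has not changed.
lam = 15
term = exp(-lam)  # probability of k = 0
prob = term
for k in range(1, 11):
    term *= lam / k  # recurrence: P(X = k) = P(X = k-1) * lam / k
    prob += term
print(round(prob, 3))  # about 0.118
```

A drop to 10 accidents has roughly a 12% chance of happening by random fluctuation alone, which is too likely to count as good evidence that the accident rate dropped.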

Think of this example next time you hear politicians say that some measure improved during their administration: economic growth, employment, crime rates, etc. The basic question is whether in fact the measure changed. The next question is whether the change was more likely a coincidence or a genuine improvement. And if there was a real improvement, ask whether the politician deserves any credit.

(The homework exercise came from Statistical Inference, problem 8.2.)

Visual Studio 2010 is a pig

Visual Studio 2010 has not made a good first impression.

It took about a day to install. I was using the Visual Studio Ultimate Web Installer and much of the time was spent downloading bits. I’m sure it would have been faster had I started with a DVD. Also, I wasn’t giving the install my full attention. I was doing my regular work on one machine while installing VS 2010 on a remote machine. I would connect to the remote machine now and then to check on the progress. I don’t know exactly how long it took, but it was the majority of a day.

When I first started Visual Studio 2010, it took about half an hour to write my first “hello world” example. When I fired up VS 2010, I spent several minutes staring at a dialog that said “Microsoft Visual Studio is loading user settings. This may take a few minutes.” Seven minutes after launching Visual Studio, the application went away and my machine rebooted. I started Visual Studio again, started a C# console application, inserted a WriteLine statement, and compiled. Total elapsed time: 27 minutes.

I closed Visual Studio and did some more work. Later I came back and opened Visual Studio to write “hello world” again. Time from starting Visual Studio to compiling: 2 minutes 50 seconds.

Now I realize that start-up time isn’t everything. Most users will start Visual Studio and keep it up for hours or days. And that’s who Visual Studio is intended to serve. It’s not meant to be something you fire up for quick jobs.

Visual Studio 2010 is huge. The installation DVD is 2.3 GB. The source code for VS 2010 contains about 1,500,000 files and takes Microsoft 61 hours to build according to Phil Haack. (He said he didn’t know how many machines the build process uses.) Phil Haack also said that the release of VS 2010 was delayed because the feedback from testers was that the product was too slow. If the released product is faster, the betas must have been intolerably slow.

Update: I installed the Express version of VS 2010 on another computer and have been using it regularly. It is much faster, and pleasant to use. Maybe there’s something about the Ultimate edition (TFS integration?) that slows it down.


20 why posts

Twenty “why” posts from this blog:


Best explanation of burnout

Here’s the best explanation of burnout I’ve seen:

… burning out isn’t just about work load, it’s about work load being greater than the motivation to do work.

The context is a former consultant saying that heavy course loads at MIT did not burn him out, but an easy job doing dishonest consulting work did.

From The story BCG offered me $16,000 not to tell.

Related post: The most subtle of the seven deadly sins

Two meanings of “argument”

The most common use of the word “argument” is to describe a disagreement. So the first time you hear “argument” to mean something you pass into a function (either a mathematical function or a programming language function), it sounds odd. How did “argument” come to mean two very different things? Here is an explanation.

It is curious to track the path by which the word “argument” came to have two different meanings, one in mathematics and the other in everyday English. According to the Oxford English Dictionary, the word derives from the Latin for “to make clear, prove”; thus it came to mean, by one thread of derivation, “the evidence offered as proof”, which is to say, “the information offered”, which led to its meaning in Lisp. But in the other thread of derivation, it came to mean “to assert in a manner against which others may make counter assertions”, which led to the meaning of the word as a disputation.

Taken from An Introduction to Programming in Emacs Lisp.

Update: As Dave Richeson points out in the comments below, there are really three meanings of “argument” being discussed.

Eclectic mix podcast

If you’re looking for a way to discover some new music, check out Eclectic Mix. The show lives up to its name, featuring all kinds of music. For example, here’s a show with Latin Giants of Jazz and here’s one with The Monks and Choirs of Kiev Pechersk Lavra.

85% functional language purity

James Hague offers this assessment of functional programming:

My real position is this: 100% pure functional programming doesn’t work. Even 98% pure functional programming doesn’t work. But if the slider between functional purity and 1980s BASIC-style imperative messiness is kicked down a few notches — say to 85% — then it really does work. You get all the advantages of functional programming, but without the extreme mental effort and unmaintainability that increases as you get closer and closer to perfectly pure.

I found James Hague’s blog via a link from Greg Wilson. I’ve gone back through several posts on Hague’s blog Programming in the 21st Century and look forward to reading more.


Idea people versus results people

I liked this quote from Hugh MacLeod the other day:

Idea-Driven People come up with Ideas (and Results), more often than Results-Driven People come up with Results (and Ideas).

His quote brings up two related fallacies.

  1. People who are good at one thing must be bad at something else.
  2. People who specialize in something must be good at it.

Neither of these is necessarily true. It’s wrong to assume that because someone is good at coming up with ideas, they must be bad at implementing them. It’s also wrong to assume that someone produces results just because they call themselves results-driven.

The first fallacy comes up all the time in hiring. Job seekers may leave credentials off their résumé to keep employers from assuming that strength in one area implies weakness in another area. When I was looking for my first programming job, some companies assumed I must be a bad programmer because I had a PhD in math. One recruiter suggested I take my degree off my résumé. I didn’t do that, and fortunately I found a job with a company that needed a programmer who could do signal processing.

Andrew Gelman addressed the second fallacy in what he calls the Pinch-Hitter Syndrome:

People whose job it is to do just one thing are not always so good at that one thing.

As he explains here,

The pinch-hitter is the guy who sits on the bench and then comes up to bat, often in a key moment of a close game. When I was a kid, I always thought that pinch hitters must be the best sluggers in baseball, because all they do (well, almost all) is hit. But … pinch hitters are generally not the best hitters.

This makes sense in light of the economic principle of comparative advantage. You shouldn’t necessarily do something just because you’re good at it. You might be able to do something else more valuable. When people in some area don’t do their job particularly well, it may be because those who can do the job better have moved on to something else.

Related post: Self-sufficiency is the road to poverty

Best management decision

In his book The Design of Design, Frederick Brooks describes his most productive decision as a manager at IBM.

My most productive single act as an IBM manager had nothing to do with product development. It was sending a promising engineer to go as a full-time IBM employee in mid-career to the University of Michigan to get a PhD. This action … had a payoff for IBM beyond my wildest dreams.

That engineer was E. F. Codd, father of relational databases.

Related post: Many hands make more work