Orthogonal polynomials

This morning I posted some notes on orthogonal polynomials and Gaussian quadrature.

“Orthogonal” just means perpendicular. So how can two polynomials be perpendicular to each other? In geometry, two vectors are perpendicular if and only if the dot product of their coordinates is zero. In more general settings, two things are said to be orthogonal if their inner product (a generalization of the dot product) is zero. So what was a theorem in basic geometry is taken as a definition in other settings. Typically mathematicians say “orthogonal” rather than “perpendicular.” The basic idea of lines meeting at right angles acts as a reliable guide to intuition in more general settings.

Two polynomials are orthogonal if their inner product is zero. You can define an inner product for two functions by integrating their product, sometimes with a weighting function.
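For example, Chebyshev polynomials are orthogonal on [-1, 1] with respect to the weight function 1/√(1 − x²). Here’s a minimal Python sketch (my illustration, not part of the notes) that checks this numerically using NumPy and SciPy:

```python
import numpy as np
from scipy.integrate import quad

# Chebyshev polynomials T_m and T_n are orthogonal on [-1, 1]
# with respect to the weight function 1/sqrt(1 - x^2).
def chebyshev_inner_product(m, n):
    Tm = np.polynomial.chebyshev.Chebyshev.basis(m)
    Tn = np.polynomial.chebyshev.Chebyshev.basis(n)
    # quad's 'alg' weight handles the endpoint singularity:
    # (x + 1)^(-1/2) * (1 - x)^(-1/2) = 1/sqrt(1 - x^2)
    value, _ = quad(lambda x: Tm(x) * Tn(x), -1, 1,
                    weight='alg', wvar=(-0.5, -0.5))
    return value

print(chebyshev_inner_product(2, 3))  # essentially 0: orthogonal
print(chebyshev_inner_product(3, 3))  # about pi/2: the squared norm of T_3
```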

Orthogonal polynomials have remarkable properties that are easy to prove. Last week I posted some notes on Chebyshev polynomials. The notes posted today include Chebyshev polynomials as a special case and focus on the application of orthogonal polynomials to quadrature. (“Quadrature” is just an old-fashioned word for integration, usually applied to numerical integration in one dimension.) It turns out that every class of orthogonal polynomials corresponds to an integration rule.
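As a concrete instance of that correspondence, the Legendre polynomials give Gauss–Legendre quadrature: the nodes of an n-point rule are the roots of the nth Legendre polynomial, and the rule integrates polynomials of degree up to 2n − 1 exactly. A short sketch using NumPy (again my illustration, not from the notes):

```python
import numpy as np

# 5-point Gauss-Legendre rule: nodes are the roots of the Legendre
# polynomial P_5; the rule is exact for polynomials up to degree 9.
nodes, weights = np.polynomial.legendre.leggauss(5)

f = lambda x: x**8                 # degree 8 <= 2*5 - 1 = 9
approx = np.dot(weights, f(nodes))
exact = 2.0 / 9.0                  # integral of x^8 over [-1, 1]
print(approx, exact)               # agree to machine precision
```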

Read More

Honeybee genealogy

Male honeybees are born from unfertilized eggs. Female honeybees are born from fertilized eggs. Therefore males have only a mother, but females have both a mother and a father.

Take a male honeybee and graph his ancestors. Let B(n) be the number of bees at the nth level of the family tree. At the first level of the tree is our male honeybee all by himself, so B(1) = 1. At the next level of our tree is his mother, all by herself, so B(2) = 1.

Pick one of the bees at level n of the tree. If this bee is male, he has a mother at level n+1, and a grandmother and grandfather at level n+2. If this bee is female, she has a mother and father at level n+1, and one grandfather and two grandmothers at level n+2. In either case, the number of grandparents is one more than the number of parents. Therefore B(n) + B(n+1) = B(n+2).

To summarize, B(1) = B(2) = 1, and B(n) + B(n+1) = B(n+2). These are the initial conditions and recurrence relation that define the Fibonacci numbers. Therefore the number of bees at level n of the tree equals F(n), the nth Fibonacci number.
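Here’s a small Python sketch of the recurrence, just to make the argument above concrete:

```python
def bees_at_level(n):
    """Number of ancestors of a male honeybee at level n of his family tree.

    B(1) = B(2) = 1 and B(n+2) = B(n) + B(n+1), i.e. the Fibonacci numbers.
    """
    a, b = 1, 1                      # B(1), B(2)
    for _ in range(n - 1):
        a, b = b, a + b
    return a

print([bees_at_level(n) for n in range(1, 10)])
# [1, 1, 2, 3, 5, 8, 13, 21, 34]
```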

This is a more realistic demonstration of Fibonacci numbers in nature than the oft-repeated rabbit problem.

Read More

Enterprising software

In a talk from Rails Conf, Cyndi Mitchell points out how “enterprise” in the phrase “enterprise software” has taken on the opposite of its customary meaning.

If you call a person enterprising, you have in mind someone who takes risks and accomplishes things. And “Enterprise” has been the name of numerous ships, real and fictional, based on the bold, adventurous overtones of the name. But Cyndi Mitchell says when she thinks about enterprise software, the first words that come to mind are bloatware, incompetence, and corruption. I wouldn’t go quite that far, but words like “bureaucratic” and “rigid” would certainly be on my list. In any case, “enterprise” has a completely different connotation in “enterprise software” than in “USS Enterprise.”

The USS Enterprise circa 1890 at the New York Navy Yard

Related posts:

Organizational scar tissue
Parkinson’s law
Stupidity scales

Read More

Introduction to Mac for Windows developers

Here are a couple of podcasts introducing Windows developers to software development on the Macintosh.

Scott Hanselman: What’s it like for Mac Developers, an interview with Steven Frank

.NET Rocks: Miguel de Icaza and Geoff Norton on Mono, mostly about .NET development on the Mac

Also, there are a lot of Mac-related talks on the GeekCruise podcast. The talks from January 2007 were directed at a general audience new to the Mac.

Hanselman’s podcast talks about some of the cultural differences between Microsoft and Apple customers. For example, Mac users update their OS more often and complain less about OS changes that break software.

Read More

How to avoid being outsourced or open sourced

Kevin Kelly has a post entitled Better than Free that lists eight things people will pay a premium for, even while closely related things are free or cheap:

  • Immediacy
  • Personalization
  • Interpretation
  • Authenticity
  • Accessibility
  • Embodiment
  • Patronage
  • Findability

Daniel Pink has a related list in his book A Whole New Mind. (Here’s an interview with Pink that gives an overview of his book.) Pink says the skills that will be increasingly valued over time, and difficult to outsource, are:

  • Design
  • Story
  • Symphony
  • Empathy
  • Play
  • Meaning

In The World Is Flat, Thomas Friedman says four kinds of people are “untouchable,” that is, immune to losing their job due to outsourcing. These are people who are

  • Special
  • Specialized
  • Anchored
  • Really adaptable

In Friedman’s terminology, “special” means world-class talent, someone like Michael Jordan or Yo-Yo Ma. Anchored means geographically anchored, like a barber. For most of us, our best options are to be specialized or really adaptable.

How do these three lists fit together? You could see Kelly’s and Pink’s lists as ways to specialize and adapt your product or service per Friedman’s advice.

  • Meet your customer’s emotional needs (design, authenticity, patronage, empathy).
  • Make things convenient (immediacy, accessibility, findability).
  • Bring the pieces together, both literally (personalization, symphony) and figuratively (interpretation, story, meaning).
  • Be human (embodiment, play).

Read More

Merry-go-round water pump

I ran across this on Guy Kawasaki’s blog, where he calls it “the cleverest idea I’ve seen in years.” It’s a water pump for developing areas that works by having children play on it. Here’s a video from National Geographic demonstrating the pump.

Read More

Chebyshev polynomials

I posted a four-page set of notes on Chebyshev polynomials on my web site this morning. These polynomials have many elegant properties that are simple to prove. They’re also useful in applications.

Mr. Chebyshev may have the honor of the most variant spellings of any mathematician’s name. I believe “Chebyshev” is now standard, but his name has been transliterated from the Russian as Chebychev, Chebyshov, Tchebycheff, Tschebyscheff, etc. His polynomials are denoted Tn(x), based on his initial in one of the older transliterations.
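One of those elegant properties is the identity Tn(cos θ) = cos(nθ). Here’s a quick Python check (my illustration, not from the notes):

```python
import numpy as np

# Verify T_n(cos(theta)) = cos(n * theta) at a handful of points.
n = 5
theta = np.linspace(0, np.pi, 7)
Tn = np.polynomial.chebyshev.Chebyshev.basis(n)

print(np.allclose(Tn(np.cos(theta)), np.cos(n * theta)))  # True
```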

Read More

False positives for medical papers

My previous two posts have been about false research conclusions and false positives in medical tests. The two are closely related.

With medical testing, the prevalence of the disease in the population at large matters greatly when deciding how much credibility to give a positive test result. Clinical studies are similar. The proportion of potential genuine improvements in the class of treatments being tested is an important factor in deciding how credible a conclusion is.

In medical tests and clinical studies, we’re often given the opposite of what we want to know. We’re given the probability of the evidence given the conclusion, but we want to know the probability of the conclusion given the evidence. These two probabilities may be similar, or they may be very different.
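In symbols, Bayes’ theorem is what connects the two:

P(conclusion | evidence) = P(evidence | conclusion) × P(conclusion) / P(evidence)

The prior P(conclusion), i.e. the prevalence of the disease or the proportion of genuinely effective treatments, is what can make these two conditional probabilities very different.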

The analogy between false positives in medical testing and false positives in clinical studies is helpful, because the former is easier to understand than the latter. But the problem of false conclusions in clinical studies is more complicated. For one thing, there is no publication bias in medical tests: patients get the results, whether positive or negative. In research, negative results are usually not published.

Read More

False positives for medical tests

The most commonly given example of Bayes’ theorem is testing for rare diseases. The results are not intuitive. If a disease is rare, then your probability of having the disease given a positive test result remains low, even when the test is quite accurate. For example, suppose a disease affects 0.1% of the population and a test for the disease is 95% accurate. Then your probability of having the disease given that you test positive is only about 2%.
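Here’s a minimal Python sketch of that calculation, assuming “95% accurate” means the test has both 95% sensitivity and 95% specificity:

```python
# Bayes' theorem for the rare-disease example above.
prevalence = 0.001        # disease affects 0.1% of the population
sensitivity = 0.95        # P(test positive | disease)
specificity = 0.95        # P(test negative | no disease)

p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive

print(round(p_disease_given_positive, 3))  # about 0.019, i.e. roughly 2%
```

Nearly all of the positive results come from the much larger healthy population, which is why the posterior probability stays so low.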

Textbooks typically rush through the medical testing example, though I believe it takes more detail and numerical examples for it to sink in. I know I didn’t really get it the first couple of times I saw it presented.

I just posted an article that goes over the medical testing example slowly and in detail: Canonical example of Bayes’ theorem in detail. I take what may be rushed through in half a page of a textbook and expand it to six pages, and I use more numbers and graphs than equations. It’s worth going over this example slowly because once you understand it, you’re well on your way to understanding Bayes’ theorem.

Read More

Most published research results are false

John Ioannidis wrote an article in Chance magazine a couple of years ago with the provocative title Why Most Published Research Findings are False. [Update: Here’s a link to the PLoS article reprinted by Chance. And here are some notes on the details of the paper.] Are published results really that bad? If so, what’s going wrong?

Whether “most” published results are false depends on context, but a large percentage of published results are indeed false. Ioannidis published a report in JAMA looking at some of the most highly-cited studies from the most prestigious journals. Of the studies he considered, 32% were found to have either incorrect or exaggerated results. Of those studies with a 0.05 p-value, 74% were incorrect.

The underlying causes of the high false-positive rate are subtle, but one problem is the pervasive use of p-values as measures of evidence.

Folklore has it that a “p-value” is the probability that a study’s conclusion is wrong, and so a 0.05 p-value would mean the researcher should be 95% sure that the results are correct. In this case, folklore is absolutely wrong. And yet most journals accept a p-value of 0.05 or smaller as sufficient evidence.

Here’s an example that shows how p-values can be misleading. Suppose you have 1,000 totally ineffective drugs to test. About 1 out of every 20 trials will produce a p-value of 0.05 or smaller by chance, so about 50 trials out of the 1,000 will have a “significant” result, and only those studies will publish their results. The error rate in the lab was indeed 5%, but the error rate in the literature coming out of the lab is 100%!
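Here’s a small Python simulation of that thought experiment (my sketch, not from the original argument):

```python
import numpy as np

# 1,000 trials of totally ineffective drugs, tested at the 0.05 level.
rng = np.random.default_rng(0)

n_trials = 1000
# Under the null hypothesis, p-values are uniformly distributed on [0, 1].
p_values = rng.uniform(0, 1, n_trials)
significant = p_values <= 0.05

print(significant.sum())   # roughly 50 "significant" results by chance alone
# If only these get published, every published "discovery" is false:
# the error rate in the literature is 100%.
```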

The example above is exaggerated, but look at the JAMA study results again. In a sample of real medical experiments, 32% of those with “significant” results were wrong. And among those that just barely showed significance, 74% were wrong.

See Jim Berger’s criticisms of p-values for more technical depth.

Read More

Proofs of false statements

Mark Dominus brought up an interesting question last month: have there been major screw-ups in mathematics? He defines a “major screw-up” to be a flawed proof of an incorrect statement that was accepted for a significant period of time. He excludes the case of incorrect proofs of statements that were nevertheless true.

It’s remarkable that he can even ask the question. Can you imagine someone asking with a straight face whether there have ever been major screw-ups in, say, software development? And yet it takes some hard thought to come up with examples of really big blunders in math.

No doubt there are plenty of flawed proofs of false statements in areas too obscure for anyone to care about. But in mainstream areas of math, blunders are usually uncovered very quickly. And there are examples of theorems that were essentially correct but neglected some edge case. Proofs of statements that are just plain wrong are hard to think of. But Mark Dominus came up with a few.

Yesterday he gave an example of a statement by Kurt Gödel that was flat-out wrong but accepted for over 30 years. Warning: reader discretion advised. His post is not suitable for those who get queasy at the sight of symbolic logic.

Read More

A bigger clipboard

Imagine you find a paragraph on the web that you want to email to a friend. You copy the paragraph. Then you think you should send a link to the full article, so you copy that too. You start composing your email and you type Ctrl-V to paste in the paragraph, but to your disappointment you just paste the link. So you go back and copy the paragraph again.

The problem is that the Windows clipboard only holds the most recent thing you copied. Jeff Atwood posted an article a few days ago called Reinventing the Clipboard where he recommends a utility called ClipX that extends the clipboard. After you install ClipX, typing Ctrl-Shift-V brings up a little menu of recent clippings available for pasting.

I’ve been using ClipX for a few days now. It’s simple and unobtrusive. The only slight challenge at first is remembering that it’s available. Once you think to yourself once or twice, “Oh wait, I don’t have to go back and copy that again,” you’re hooked.

Read More

Footnote to interruption post

In my post yesterday about interruptions I quoted Mary Czerwinski from Microsoft Research. She told me afterward that two of the applications mentioned in the interview have been released. They are publicly available from the Microsoft Research download site.

I haven’t had a chance to use either of these tools yet. If you try them out, let me know what you think.

Read More