When does the sum of three numbers equal their product?

Mathematics Diary posted the following identity this morning. If a + b + c = π then

tan(a) + tan(b) + tan(c) = tan(a) tan(b) tan(c).

I’d never seen that before. It’s striking that the sum of three numbers would also equal their product. In fact, the only way for the product of three numbers to be equal to their sum is for the three numbers to be tangents of angles that add up to π radians. I’ll explain below why that’s true.

First, we can generalize the identity above slightly by saying it holds if a + b + c is a multiple of π. In fact, it can be shown that

tan(a) + tan(b) + tan(c) = tan(a) tan(b) tan(c)

if and only if a + b + c is a multiple of π. (Here’s a sketch of a proof.)

Now suppose x + y + z = x y z. We can find numbers a, b, and c such that x = tan(a), y = tan(b), and z = tan(c), and it follows that a + b + c must be a multiple of π. But can we choose a, b, and c so that their sum is not just a multiple of π but exactly π? Yes. If a + b + c equals kπ for some integer k, replace c with the original c minus (k-1)π. This makes the sum exactly π and leaves the value of tan(c) unchanged, since the tangent function has period π.
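As a quick numerical sanity check (a small sketch, not part of the original post), we can pick a and b at random, force c = π − a − b, and confirm that the two sides of the identity agree:

from math import pi, tan, isclose
import random

random.seed(42)
for _ in range(5):
    a = random.uniform(0.1, 0.7)
    b = random.uniform(0.1, 0.7)
    c = pi - a - b                      # force a + b + c = pi
    lhs = tan(a) + tan(b) + tan(c)
    rhs = tan(a) * tan(b) * tan(c)
    assert isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-9)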

Spell checking from Python

I needed to find a spell checker I could call from Python, so I did a Google search and ran across GNU aspell. I tried installing it but got contradictory warning messages: aspell not installed, aspell already installed, etc. Then I remembered what an awful time I’d had with aspell before, and gave up on it.

Next I tried Ryan Kelly’s PyEnchant and it worked like a charm. I downloaded the installer for Windows and ran it. Then I opened up a Python console and typed an example following the online tutorial.

>>> import enchant
>>> d = enchant.Dict("en_US")
>>> d.check("Potatoe")
False
>>> d.check("Potato")
True

It just works.
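PyEnchant can also propose corrections. A quick follow-up in the same session might look like this (the exact list of suggestions depends on the installed dictionary):

>>> d.suggest("Potatoe")   # returns a list of candidate spellings, e.g. ['Potato', ...]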

Web 2.0 over dial-up

I’m borrowing an old Pentium III computer with a dial-up Internet connection. I haven’t used dial-up in a long time and was surprised what a difference bandwidth makes. Many “Web 2.0” sites are just painful to use. Some simply do not work. One site gave me a message essentially saying to go away and come back with a better connection.

However, StackOverflow was a pleasant surprise. I thought that since it uses a lot of client-side JavaScript, the site would be just as sluggish as the others I tried. It takes a while to log in, but after that the site is easy to use over a slow connection. The developers of the site must have thought about conserving bandwidth; I find it hard to believe that the excellent low-bandwidth performance was an accident. Very impressive.

Distribution of adult heights

It is well known that adult male heights follow a normal (Gaussian) distribution. The same is true of adult female heights. But what does the distribution of heights look like for adults in general? You might be surprised.

Assume heights for women follow a normal distribution with a mean of 64 inches and a standard deviation of 3 inches.

Assume men’s heights follow a normal distribution with the same standard deviation but a mean of 70 inches.

Finally, assume men and women each make up 50% of the population. Then you get the following distribution for the heights of adults in general.
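Here is a minimal sketch (not the code from the original post) that plots this mixture from the stated parameters, assuming numpy, scipy, and matplotlib are available:

import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

women = norm(loc=64, scale=3)                    # mean 64 in, sd 3 in
men   = norm(loc=70, scale=3)                    # mean 70 in, same sd
x = np.linspace(52, 82, 500)
mixture = 0.5 * women.pdf(x) + 0.5 * men.pdf(x)  # 50/50 mix of the two populations

plt.plot(x, mixture)
plt.xlabel("height (inches)")
plt.ylabel("density")
plt.show()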

The mixture is surprisingly flat on top. Minor variations on the assumptions above can change the shape, making it more rounded at the top, making it dip in the middle, or making it tip to one side.

See Adult heights and mixture distributions for mathematical details.

See also Why heights are normally distributed.

How to put PDF properties in a LaTeX file

My previous post described how to put links in a PDF file generated from LaTeX. The hyperref package that lets you include links also lets you set PDF document properties. I’ve been using Adobe Acrobat to do this after creating my PDF file with pdflatex, but that’s unnecessary. Here’s how to put the PDF properties directly in the LaTeX file. Add something like this

\hypersetup
{
    pdfauthor={John Hancock},
    pdfsubject={Some subject},
    pdftitle={Sample document},
    pdfkeywords={LaTeX, PDF, hyperlinks}
}

after the \usepackage{hyperref} instruction at the top of your file.

How to link to web pages from LaTeX-generated PDF

This has been on my to-do list for a while, but I finally found out how to embed hyperlinks in a PDF file generated from LaTeX.

Short answer: put \usepackage{hyperref} in your header, and when you want to link to a page, use the command \href{URL}{anchor text}. For example,

\documentclass{article}
\usepackage{hyperref}
\begin{document}

Here's a link to \href{http://twitter.com/home}{Twitter}.

\end{document}

For much more detail on links in LaTeX documents, see Patrick Jöckel’s LaTeX-PDF page and the hyperref package documentation.

Probability that a study result is true

Suppose a new study comes out saying a drug or a food or a habit lowers your risk of some disease. What is the probability that the study’s result is correct? Obviously this is a very important question, but one that is not raised often enough.

I’ve referred to a paper by John Ioannidis (*) several times before, but I haven’t gone over the model he uses to support his claim that most study results are false. This post will look at some equations he derives for estimating the probability that a claimed positive result is correct.

First of all, let R be the ratio of true hypotheses to false hypotheses among those being investigated in a particular area. Of course we never know exactly what R is, but let’s pretend that somehow we knew that out of 1000 hypotheses being investigated in some area, 200 are correct. Then R would be 200/800 = 0.25. The value of R varies quite a bit, being relatively large in some fields of study and quite small in others. Imagine researchers pulling hypotheses to investigate from a hat. The probability of selecting a hypothesis that really is true is R/(R+1), and the probability of selecting a false hypothesis is 1/(R+1).

Let α be the probability of incorrectly declaring a false hypothesis to be true. Studies are often designed with the goal that α be 0.05. Let β be the probability that a study would incorrectly conclude that a true hypothesis is false. In practice, β is far more variable than α. You might find study designs with β anywhere from 0.5 down to 0.01. The design choice β = 0.20 is common in some contexts.

There are two ways to publish a study claiming a new result: you could have selected a true hypothesis and correctly concluded that it was true, or you could have selected a false hypothesis but incorrectly concluded it was true. The former has probability (1-β)R/(R+1) and the latter has probability α/(R+1). The total probability of concluding a hypothesis is true, correctly or incorrectly, is the sum of these probabilities, i.e. ((1-β)R + α)/(R+1). The probability that a hypothesis is true given that a study concluded it was true, the positive predictive value or PPV, is the ratio of (1-β)R/(R+1) to ((1-β)R + α)/(R+1). In summary, under the assumptions above, the probability of a claimed result being true is (1-β)R/((1-β)R + α).
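Here is that formula as a small Python function (a sketch for illustration; the names are mine, not Ioannidis’s):

def ppv(R, alpha, beta):
    # probability that a claimed positive result is true
    return (1 - beta) * R / ((1 - beta) * R + alpha)

# Example: with R = 0.25, alpha = 0.05, and beta = 0.20, the PPV is 0.8.
print(ppv(0.25, 0.05, 0.20))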

If (1 – β)R < α then the model says that a claim is more likely to be false than true. This can happen if R is small, i.e. only a small proportion of the hypotheses under investigation are true, or if β is large, i.e. studies have low power (typically because they are small). If R is smaller than α, most claimed results will be false no matter how small you make β, i.e. no matter how large the study. This says that in a challenging area, where few of the ideas being investigated lead to progress, there will be a large proportion of false results published, even if the individual researchers are honest and careful.

Ioannidis develops two other models refining the model above. Suppose that because of bias, some proportion of results that would otherwise have been reported as negative are reported as positive. Call this proportion u. The derivation of the positive predictive value is similar to that in the previous model, but messier. The final result is R(1-β + uβ)/(R(1-β + uβ) + α + u – αu). If 1 – β > α, which is nearly always the case, then the probability of a reported result being correct decreases as bias increases.

The final model considers the impact of multiple investigators testing the same hypothesis. If more people try to prove the same thing, it’s more likely that someone will get lucky and “prove” it, whether or not the thing to be proven is true. Leaving aside bias, if n investigators are testing each hypothesis, the probability that a positive claim is true is given by R(1 – βⁿ)/(R + 1 – (1 – α)ⁿ – Rβⁿ). As n increases, the probability of a positive claim being true decreases.
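For completeness, here are the two refinements in the same style (again just a sketch; u is the bias proportion and n the number of investigators, following the notation above):

def ppv_bias(R, alpha, beta, u):
    # PPV when a proportion u of would-be negative results get reported as positive
    num = R * (1 - beta + u * beta)
    return num / (num + alpha + u - alpha * u)

def ppv_multiple(R, alpha, beta, n):
    # PPV when n independent investigators test the same hypothesis
    return R * (1 - beta**n) / (R + 1 - (1 - alpha)**n - R * beta**n)

# Both reduce to the basic formula when u = 0 and n = 1.
print(ppv_bias(0.25, 0.05, 0.20, 0.0), ppv_multiple(0.25, 0.05, 0.20, 1))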

The probability of a result being true is often much lower than is commonly believed. One reason is that hypothesis testing focuses on the probability of the data given a hypothesis rather than the probability of a hypothesis given the data. Calculating the probability of a hypothesis given data relies on prior probabilities, such as the factors R/(R+1) and 1/(R+1) above. These prior probabilities are elusive and controversial, but they are critical in evaluating how likely it is that claimed results are true.

(*) John P. A. Ioannidis, Why most published research findings are false. CHANCE volume 18, number 4, 2005.

I am not an operating system

I thought Apple’s I’m a Mac ad campaign was amusing, but I’m not impressed with Microsoft’s belated I’m a PC response. When Apple claimed that cool people use Macs, it was lame to respond that some PC users are cool too.

PC users don’t identify with their operating system the way Mac users do. PC users don’t put Windows logos on cars, much less on their bodies, for example. (Maybe some do, but they’re less common.)

This was inevitable, and it has little to do with Apple and Microsoft per se. In any industry, when one vendor has a 95% market share, they’re forced into the position of being all things to all people and making a lot of people moderately happy, or at least not unhappy. And when another vendor has a 5% market share, they have to make that 5% very happy. The minority vendor can be hip and scrappy; the majority vendor cannot.

Maybe a better response to “I’m a Mac” would have been “I’m not an operating system,” poking fun at people who identify strongly with their computer.

What it takes to make Paint.NET easy to install

Writing good installation programs is hard. It takes experience and forethought to imagine all the things that might happen on the client’s computer. Top-notch programmers know that installers are critical to a user’s experience and put a lot of time into making them high quality. Not-so-top-notch programmers think a project is done when they can get it to work on their own computer.

Scott Hanselman has an interview with Paint.NET creator Rick Brewster. The main topic of the interview was the things Rick did to make Paint.NET easy to install.

Paint.NET is an image editing product written using Microsoft’s .NET framework. In a sense Paint.NET is the geometric mean between the Paint program that ships with Windows and Adobe Photoshop: it does about 20x more than Paint and about 20x less than Photoshop. It does most of the things I need, but it’s small enough that I can do a brute force search of the features if I have to.

The differences between bias, consistency, and efficiency

Sometimes code is easier to understand than abstract math. A few days ago I was having a hard time conveying the ideas of bias, consistency, and efficiency in a statistics class. Writing some pseudo-code on the board seemed to help clear things up. Loops and calls to random number generation routines are more tangible than discussions about random samples.

Later I turned the pseudo-code into Python code (after all, Python is supposed to be “executable pseudo-code”) and fancied it up a bit. The following page gives some explanation, some plots of the output, and the source code.

The difference between an unbiased estimator and a consistent estimator
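For a flavor of what went on the board, here is a rough sketch in Python (not the code from the page above): it contrasts the biased variance estimator, which divides by n, with the unbiased one, which divides by n − 1.

import random

def average_variance_estimates(n, reps=10000, mu=0.0, sigma=1.0):
    # average the two variance estimators over many simulated samples of size n
    biased, unbiased = 0.0, 0.0
    for _ in range(reps):
        xs = [random.gauss(mu, sigma) for _ in range(n)]
        mean = sum(xs) / n
        ss = sum((x - mean) ** 2 for x in xs)
        biased += ss / n           # divides by n: biased low
        unbiased += ss / (n - 1)   # divides by n - 1: unbiased
    return biased / reps, unbiased / reps

# With n = 5 the biased estimator averages about 0.8 = (n-1)/n while the
# unbiased estimator averages about 1.0, the true variance. Both converge
# to 1 as n grows, so the biased estimator is nevertheless consistent.
print(average_variance_estimates(5))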

How many numbers are squares mod m

In a previous post, fast way to tell whether a number is a square, the question came up of how many integers are squares modulo an integer m. That is, for how many numbers y in the list 0, 1, 2, …, m-1 does the equation x² ≡ y (mod m) have a solution? The application in the earlier post was only concerned with m equal to a power of 2, but it turns out there is a completely general solution.

Michael Lugo pointed to a paper (*) that gives a solution for all values of m. I will summarize the paper here and give a Python script implementing the solution.

The paper defines the function s(n) as the number of distinct squares modulo n. The first observation is that s(n) is a multiplicative function. That means that if m and n are relatively prime, s(mn) = s(m) s(n). Since every integer can be factored into prime powers, knowing s() is multiplicative reduces the problem to computing s(pn) for primes p. The paper then shows how to compute s() for powers of odd primes and powers of 2. The full solution is summarized in the code below.

Assume we have factored our number m. For example, suppose we want to know how many possible last three digits there are for squares base 10. That is equivalent to computing s(1000), and so we factor 1000 into 2³ · 5³. We represent that factorization as a list of pairs: [ (2, 3), (5, 3) ]. The function squares below tells us that s(1000) is 159.

def isOdd( n ):
    return n % 2

def s( factor ):
    # number of distinct squares modulo p**n, where factor = (p, n)
    (p, n) = factor
    if isOdd( p ):
        # odd prime p
        if n == 1:
            r = (p + 1)//2
        elif n == 2:
            r = (p*p - p + 2)//2
        elif isOdd( n ):
            r = (p**(n+1) + 2*p + 1)//(2*p + 2)
        else:
            r = (p**(n+1) + p + 2)//(2*p + 2)
    else:
        # p == 2
        if isOdd( n ):
            r = (2**(n-1) + 5)//3
        else:
            r = (2**(n-1) + 4)//3
    return r

def squares( factorization_list ):
    # s() is multiplicative, so multiply its values over the prime power factors
    product = 1
    for factor in factorization_list:
        product *= s(factor)
    return product

# test cases to exercise each branch
assert squares( [(5, 1)] ) == 3
assert squares( [(5, 2)] ) == 11
assert squares( [(5, 3)] ) == 53
assert squares( [(5, 4)] ) == 261
assert squares( [(2, 3)] ) == 3
assert squares( [(2, 4)] ) == 4
assert squares( [(2,3), (5,3)] ) == 159

The importance of this result to the original problem is that looking at the last several bits of a number can screen out no more than 5/6 of the non-squares. Looking at the last four bits, i.e. the last hexadecimal digit, screens 3/4 of the non-squares, so there’s not room to do much better.

Update: So about 1 out of every 6 integers is a square mod 2ⁿ for large n. What about mod pⁿ for an odd prime p? The corresponding ratio is p/(2p + 2), which approaches 1/2 for large p.
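As a quick check of the 2ⁿ ratio, using the squares() function above:

# fraction of residues mod 2**n that are squares; approaches 1/6
for n in (10, 20, 30):
    print(n, squares([(2, n)]) / 2**n)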

Update (21 April 2010): Fixed a couple typos.  Added syntax coloring to the source code. See an implementation of the algorithm in J by Tracy Harms.

* W. D. Stangl, “Counting Squares in Z_n”, Mathematics Magazine, Vol. 69, No. 4, October 1996, pp. 285–289.

Favorite audio book narrators

I have a long commute to work and so I’ve listened to a lot of recorded books. Here are my two favorite narrators for audio books.

John McDonough, narrator of the Mitford series by Jan Karon.

Rob Inglis, narrator of J. R. R. Tolkien’s books The Hobbit and The Lord of the Rings trilogy.

Both gentlemen are pleasant to listen to. They both develop unique voices for a large cast of characters, making it easier to follow their novels while driving.