Replacing Google Reader

Google Reader is going away on Monday. ReplaceReader.com lists many alternatives, sorted by popularity.

Another RSS reader I didn’t see on their list that looks promising is Yoleo.

Also, Digg Reader is supposed to be released soon.

Update: Digg Reader is out. You can read more on their blog. I’ve only used it for a few minutes, but I think it may be my favorite RSS reader.

Update: Looks like ReplaceReader.com is dead. I’m currently using Inoreader.

Update: Digg Reader is dead. But Inoreader still works!

Travel

This week I spent two days in Los Angeles visiting a client and two days in Austin for the SciPy conference.

The project in LA is very interesting, but not something I’m free to talk about. The conference was fun, especially because I got to see in person several people I had only talked to online. Both trips stirred up a lot of ideas that are going to take a while to process.

I may be going to Seattle before too long, but other than that I’m done traveling for now.

Clever visual proof

There are as many yellow dots above the bottom row of the triangle as there are pairs of purple dots on the bottom row. To see this, note that every yellow dot determines a pair of purple dots by projecting it down to the left and to the right. Conversely, you can go from any pair of purple dots up to a unique yellow dot.
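To make the count concrete, here is a quick Python check (mine, not from the original post): a triangle with n dots on its bottom row has 1 + 2 + … + (n − 1) dots above that row, which equals the number of pairs C(n, 2).

```python
from math import comb

# A triangle with n dots on the bottom row has rows of
# 1, 2, ..., n-1 dots above it.
for n in range(2, 20):
    dots_above = sum(range(1, n))    # 1 + 2 + ... + (n-1)
    pairs_on_bottom = comb(n, 2)     # ways to choose 2 purple dots
    assert dots_above == pairs_on_bottom
print("counts agree for n = 2..19")
```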

Via sark

Related post: Tetrahedral numbers

Statistical evidence versus legal evidence

This is the job of a juror in the US legal system described in statistical terms:

Compute the posterior probability of a defendant’s guilt conditioned on the admissible evidence, starting with a prior biased toward innocence. Report “guilty” if the posterior mean probability of guilt is above a level referred to as “beyond reasonable doubt.”

A juror is not to compute a probability conditioned on all evidence, only admissible evidence. One of the purposes of voir dire is to identify potential jurors who do not understand this concept and strike them from the jury pool. Very few jurors would understand or use the language of conditional probability, but a competent juror must understand that some facts are not to be taken into consideration in reaching a verdict.

For example, the fact that someone has been arrested, indicted by a grand jury, and brought to trial is not itself to be considered evidence of guilt. It is not legal evidence, but it certainly is statistical evidence: People on trial are more likely to be guilty of a crime than people who are not on trial.

This sort of schizophrenia is entirely proper. Statistical tendencies apply to populations, but trials are about individuals. The goal of a trial is to make a correct decision in an individual case, not to make correct decisions on average.[1]  Also, the American legal system embodies the belief that false positives are much worse than false negatives. [2]

Thinking of a verdict as a conditional probability allows a juror to simultaneously believe personally that someone is probably guilty while remaining undecided for legal purposes.

Related: A statistical problem with “nothing to hide”

Footnotes:

[1] Jury instructions are implicitly Bayesian rather than frequentist in the sense that jurors are asked to come up with a degree of belief. They are not asked to imagine an infinite sequence of similar trials etc.

[2] For example, Benjamin Franklin said  “That it is better 100 guilty Persons should escape than that one innocent Person should suffer, is a Maxim that has been long and generally approved.” In decision theory vocabulary, this is a highly asymmetric loss function.

How to convert frequency to pitch

I saw somewhere that James Earl Jones’ speaking voice is around 85 Hz. What musical pitch is that?

Let P be the frequency of some pitch you’re interested in and let C = 261.626 be the frequency of middle C. If h is the number of half steps from C to P then

P / C = 2^(h/12).

Taking logs,

h = 12 log(P / C) / log 2.

If P = 85, then h = -19.46. That is, James Earl Jones’ voice is about 19 half-steps below middle C, around the F an octave and a half below middle C.
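The calculation is easy to script. Here is a minimal Python sketch (the function name is mine, not from any package):

```python
from math import log2

MIDDLE_C = 261.626  # frequency of middle C in Hz

def half_steps_from_middle_c(freq):
    """Half steps from middle C to the pitch with the given frequency.

    Negative values mean the pitch is below middle C.
    """
    return 12 * log2(freq / MIDDLE_C)

print(round(half_steps_from_middle_c(85), 2))  # -19.46
```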

More details on the derivation above here.

There’s a page to do this calculation for you here. You can type in frequency or pitch and get the other back.

(The page also gives pitch on the Bark scale, something you may not care about but that is important in psychoacoustics.)

 

Photo credit Wikipedia
Music image created using LilyPond.

Leading digits and quadmath

My previous post looked at a problem that requires repeatedly finding the first digit of k^n where k is a single digit but n may be on the order of millions or billions.

The most direct approach would be to first compute k^n as a very large integer, then find its first digit. That approach is slow, and gets slower as n increases. A faster way is to look at the fractional part of log k^n = n log k and see which digit it corresponds to.

If n is not terribly big, this can be done in ordinary precision. But when n is large, multiplying log k by n and taking the fractional part brings less significant digits into significance. So for very large n, you need extra precision. I first did this in Python using SymPy, then switched to C++ for more speed. There I used the quadmath library for gcc. (If n is big enough, even quadruple precision isn’t enough. An advantage to SymPy over quadmath is that the former has arbitrary precision. You could, for example, set the precision to be 10 more than the number of decimal places in n in order to retain 10 significant figures in the fractional part of n log k.)
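The arbitrary-precision idea can be sketched with Python’s standard decimal module (the post used SymPy; this is just an illustration of the same precision bookkeeping, with names of my own):

```python
from decimal import Decimal, getcontext

def first_digit(k, n):
    """Leading digit of k**n via the fractional part of n*log10(k).

    Precision is set to the number of digits in n plus a margin,
    so the fractional part keeps plenty of significant figures.
    """
    getcontext().prec = len(str(n)) + 15
    t = Decimal(n) * Decimal(k).log10()
    frac = t - int(t)                # fractional part of n log10(k)
    return int(Decimal(10) ** frac)

# Cross-check against exact (slow) integer arithmetic
assert first_digit(7, 1000) == int(str(7 ** 1000)[0])
print(first_digit(7, 1000))
```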

The quadmath.h header file needs to be wrapped in an extern C declaration. Otherwise gcc will give you misleading error messages.

The 128-bit floating point type __float128 has twice as many bits as a double. The quadmath functions have the same name as their standard math.h counterparts, but with a q added on the end, such as log10q and fmodq below.

Here’s code for computing the leading digit of k^n that illustrates using quadmath.

#include <cmath>
extern "C" {
#include <quadmath.h>
}

__float128 logs[11];

// Cache log10 of 2 through 10; log10q is relatively expensive.
void init_logs()
{
    for (int i = 2; i <= 10; i++)
        logs[i] = log10q(i + 0.0);
}

int first_digit(int base, long long exponent)
{
    __float128 t = fmodq(exponent * logs[base], 1.0);
    for (int i = 2; i <= 10; i++)
        if (t < logs[i])
            return i - 1;
    return 9;  // not reached: t < 1 = log10q(10)
}

The code always returns because t is less than 1.

Caching the values of log10q saves repeated calls to a relatively expensive function. So does using the search at the bottom rather than computing powq(10, t).

The linear search at the end is more efficient than it may seem. First, it only searches a list of length 9. Second, because of Benford’s law, the leading digits are searched in order of decreasing frequency, i.e. most inputs will cause first_digit to return early in the search.

When you compile code using quadmath, be sure to add -lquadmath to the compile command.


Gelfand’s question

Gelfand’s question asks whether there is a positive integer n such that the first digits of j^n base 10 are all the same for j = 2, 3, 4, …, 9. (Thanks to @republicofmath for pointing out this problem.) This post will explore Gelfand’s question via probability.

The MathWorld article on Gelfand’s question says that the answer is no for values of n less than 100,000. That range seemed small to me. My intuition was that you’d need to try larger values of n to have a reasonable chance of finding a solution.

So how big a value of n should you explore? To answer that question, consider picking n at random. If the leading digits of j^n were randomly distributed, what would the chances be that n would satisfy Gelfand’s question? Leading digits are not random, but they do often have regular distributions.

Suppose the leading digits are uniformly distributed among 1 to 9, and that the leading digits for each base are independent. Then the probability of j^n beginning with 1 (or any other chosen digit) for all j from 2 to 9 would be (1/9)^8, and the probability of all leading digits matching any digit would be 9 times larger. That would say the probability of all leading digits matching would be on the order of 2 × 10^-7, so we should try on the order of 10^7 or 10^8 values of n to have a decent chance of finding one.
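The arithmetic under the uniform assumption is quick to verify in Python:

```python
# Each of the eight leading digits (j = 2..9) is assumed
# uniform on 1..9 and independent.
p_one_digit = (1 / 9) ** 8   # all eight equal one chosen digit
p_match = 9 * p_one_digit    # any of the nine digits may be the common one
print(p_match)               # about 2.1e-7
```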

But there’s a problem with the above argument: leading digits are not uniformly distributed. Benford’s law says that leading digits are often distributed very unevenly. When a system follows Benford’s law, the probability of a leading digit being d is log10(1 + 1/d). An uneven distribution of leading digits raises the probability that all the digits will match. If the leading digits of j^n follow Benford’s law, and are independent, then the probability of all matching is 6.84 × 10^-5, two orders of magnitude higher than assuming a uniform distribution.
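The Benford figure can be checked the same way:

```python
from math import log10

# Benford probability that a leading digit equals d, for d = 1..9
benford = [log10(1 + 1 / d) for d in range(1, 10)]

# Probability that all eight leading digits (j = 2..9) equal d,
# summed over the nine possible common digits d
p_match = sum(p ** 8 for p in benford)
print(p_match)  # about 6.84e-5
```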

Do the leading digits of j^n follow Benford’s law? Yes, at least approximately, based on testing values of n from 1 to 1,000.
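One way to see this empirically (my own quick test, using j = 2 as an example):

```python
from collections import Counter
from math import log10

# Leading-digit frequencies of 2^n for n = 1..1000, vs. Benford's law
counts = Counter(int(str(2 ** n)[0]) for n in range(1, 1001))
for d in range(1, 10):
    observed = counts[d] / 1000
    benford = log10(1 + 1 / d)
    print(d, round(observed, 3), round(benford, 3))
```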

Now suppose the probability of a randomly selected value of n satisfying Gelfand’s question is p = 6.84 × 10^-5. If we tried 100,000 random values of n, the probability of having no successes would be about 0.00107. So my intuition was wrong. If the random heuristic given here were correct, we’d stand a good chance of seeing a solution if we tried values of n up to 100,000.
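That 0.00107 figure comes from a one-line binomial calculation:

```python
p = 6.84e-5                         # probability a random n works
trials = 100_000
p_no_success = (1 - p) ** trials    # probability every trial fails
print(round(p_no_success, 5))       # about 0.00107
```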

Where does the probabilistic heuristic go wrong? In the assumption that the distributions of the leading digits are independent.

Let x_j be the sequence of leading digits of j^n for n running from 1 to 1000. Some of the x_j are correlated and some are not.

Some of the correlations are easy to see. For example, the sequences for x_2 and x_4 have correlation 0.493. That’s because 4^n = (2^n)^2. So if you know the first digit of 2^n, you can often guess the first digit of 4^n. In light of this, it’s not surprising that x_2 is also correlated with x_8, and x_3 is correlated with x_9.

You might speculate that x_j and x_k are uncorrelated if and only if j and k are relatively prime. Some sequences, such as x_2 and x_3, support that guess. But x_4 and x_5 are negatively correlated, r = -0.433, for reasons that are not obvious. I suspect that the negative correlations are the key to understanding the question.

* * *

Update: I have verified that the answer to Gelfand’s question is no for n up to 10^10.

Update: The answer to Gelfand’s question is no for all n according to Theorem 1 of the following paper: A Simple Answer to Gelfand’s Question by Jaap Eising, David Radcliffe, and Jaap Top, American Mathematical Monthly, March 2015.

Related: Applied number theory

What have you been doing?

Several people have asked me what kind of work I’ve been doing since I went out on my own earlier this year. So far I’ve done a lot of fairly small projects, though I have one large project that’s just getting started. (The larger the project and client, the longer it takes to get rolling.)

Here are some particular things I’ve been doing.

  • Helped a company improve their statistical software development process
  • Modeled the efficiency and reliability of server configurations
  • Analyzed marketing and sales data
  • Coached someone in technical professional development
  • Wrote an article for an online magazine
  • Helped a company integrate R into their software product
  • Reviewed the mathematical code in a video game
  • Researched and coded up some numerical algorithms

If something on this list sounds like something you’d like for me to do with your company, please let me know.

Continuous quantum

David Tong argues that quantum mechanics is ultimately continuous, not discrete.

In other words, integers are not inputs of the theory, as Bohr thought. They are outputs. The integers are an example of what physicists call an emergent quantity. In this view, the term “quantum mechanics” is a misnomer. Deep down, the theory is not quantum. In systems such as the hydrogen atom, the processes described by the theory mold discreteness from underlying continuity. … The building blocks of our theories are not particles but fields: continuous, fluid-like objects spread throughout space. … The objects we call fundamental particles are not fundamental. Instead they are ripples of continuous fields.

Source: The Unquantum Quantum, Scientific American, December 2012.