Product review policies

I’ve often reviewed books on this site and may review other products some day. I wanted to let readers and potential vendors know what my policies are regarding product reviews.

I don’t get paid for reviews. I review things that I find interesting and think that readers would find interesting.

I don’t do reviews with strings attached. Most publishers don’t try to attach strings. They simply ask me if I’d like a copy of their book, and that’s that. A couple publishers have tried to exert more control, and I don’t review their books.

I don’t write negative reviews because they’re not interesting. There are millions of products you won’t buy this year. Who cares about another thing not to buy? A negative review could be interesting if it were for a well-known product that many people were thinking about buying, but I haven’t been asked to review anything like that. If I find something disappointing, I don’t write a review.

Books need to be on paper. Electronic files are fine for reference and for short-form reading, but I prefer paper for long-form reading.

I’m open to reviewing hardware if it’s something I would use and something that I think my readers would be interested in. I haven’t reviewed hardware to date, but someone offered me a device that I expect to review when it arrives.

Average fraction round up

Pick a large number n. Divide n by each of the positive integers up to n and round the results up to the nearest integer. On average, how far do you round up?

Or in terms of probability, what is the expected distance between a fraction n/r, where n is large and fixed and r is chosen randomly between 1 and n, and the nearest larger integer?

In symbols, the question above is asking for the approximate value of

\frac{1}{n} \sum_{r = 1}^n \left( \left\lceil \frac{n}{r} \right\rceil - \frac{n}{r}\right )

for large n, i.e. in the limit as n goes to ∞. Here ⌈x⌉ denotes the ceiling of x, the smallest integer greater than or equal to x.

Let’s plot this as a function of n and see what it looks like. Here’s the Python code.

    import matplotlib.pyplot as plt
    from numpy import ceil, arange

    def f(n):
        return sum(ceil(n/r) - n/r for r in range(1, n + 1))/n

    x = arange(1, 100)
    y = [f(n) for n in x]
    plt.plot(x, y)
    plt.show()

And here’s the result.

It appears the graph may be converging to some value, and in fact it is. Charles de la Vallée Poussin proved in 1898 that the limiting value is the Euler–Mascheroni constant γ = 0.5772…. This constant is the limiting difference between the nth harmonic number and log n, i.e.

\gamma = \lim_{n\to\infty} \left(\sum_{r = 1}^n \frac{1}{r} - \log n \right)

We can add a horizontal line to our plot to see how well the graph seems to match γ. To do this we need to import the constant euler_gamma from numpy and add the line

    plt.axhline(y=euler_gamma, linestyle=":")

after the plot command. When we do, this is what we see.

It looks like the plot is converging to a value slightly less than γ. Apparently the convergence is very slow. When we go out to 10,000, the plot is closer to being centered around γ, though it still seems to sit below γ more often than above.

If we evaluate our function at n = 1,000,000, we get 0.577258… while γ = 0.577215….

At n = 10,000,000 we get 0.577218…. So taking 100 times as many terms in our sum gives us one extra correct decimal place, as we’d expect of a random process, since convergence usually goes like 1/√n.
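Here’s one way to carry out the computation at large n, replacing the list comprehension above with numpy arrays since the pure Python version gets slow once n is in the millions. (This is a sketch of my own, not the code used for the figures above.)

    import numpy as np

    def f(n):
        r = np.arange(1, n + 1)
        return np.sum(np.ceil(n/r) - n/r)/n

    for n in [10**6, 10**7]:
        val = f(n)
        print(n, val, abs(val - np.euler_gamma))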

New prime number record: 50th Mersenne prime

A new record for the largest known prime was announced yesterday:

2^{77,232,917} - 1

This number has 23,249,425 digits when written in base 10.

In base 2, 2^p – 1 is a sequence of p ones. For example, 31 = 2^5 – 1, which is 11111 in binary. So in binary, the new record prime is a string of 77,232,917 ones.

77,232,917 ones

You can convert a binary number to hexadecimal (base 16) by starting at the right end and converting blocks of 4 bits into hexadecimal. For example, to convert 101101111 to hex, we break it into three blocks: 1, 0110, and 1111. These blocks convert to 1, 6, and F, and so 101101111 in binary corresponds to 16F in hex.
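As a quick sanity check on this conversion, here’s a small Python snippet using the built-in base conversions. The example with p = 13 is my own; 2^13 – 1 = 8191 is also a Mersenne prime, and its hex form shows the same pattern in miniature.

    print(bin(0x16F))        # 0b101101111
    print(hex(0b101101111))  # 0x16f

    p = 13                   # 2^13 - 1 = 8191 is a (much smaller) Mersenne prime
    print(bin(2**p - 1))     # thirteen 1s
    print(hex(2**p - 1))     # 0x1fff: a 1 followed by three Fs, since 13 = 3*4 + 1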

Now 77,232,917  = 19,308,229 * 4 + 1, so if we break our string of 77,232,917 ones into groups of four, we have a single bit followed by 19,308,229 groups of 4. This means that in hexadecimal, the new prime record is 1FFF…FFF, a 1 followed by 19,308,229 F’s.

19,308,229 Fs

The new record is the 50th Mersenne prime. A Mersenne prime is a prime number that is one less than a power of 2, i.e. of the form 2^p – 1. It turns out p must be prime before 2^p – 1 can be prime. In the case of the new record, 77,232,917 is prime.

It is unknown whether there are infinitely many Mersenne primes. But now we know there are at least 50 of them.

All the recent prime number records have been Mersenne primes because there is an algorithm for testing whether a number of the special form 2^p – 1 is prime, the Lucas-Lehmer test.
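For reference, here is a bare-bones sketch of the Lucas-Lehmer test, nothing like the optimized code used in the actual record search. Start with s = 4, square and subtract 2 a total of p – 2 times, reducing mod 2^p – 1 along the way; the Mersenne number is prime exactly when the final result is 0.

    def lucas_lehmer(p):
        # Test whether 2^p - 1 is prime, for odd p > 2
        M = 2**p - 1
        s = 4
        for _ in range(p - 2):
            s = (s*s - 2) % M
        return s == 0

    # Recover the Mersenne prime exponents up through 127
    print([p for p in range(3, 128) if lucas_lehmer(p)])
    # [3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]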

Mersenne primes are closely related to perfect numbers. A number n is perfect if the sum of the numbers less than n that divide n equals n. For example, 28 is divisible by 1, 2, 4, 7, and 14, and 1 + 2 + 4 + 7 + 14 = 28.

Finding a new Mersenne prime means finding a new perfect number. If m is a Mersenne prime, then m(m + 1)/2 is a perfect number. Conversely, if n is an even perfect number, n has the form m(m + 1)/2 where m is a Mersenne prime. Nobody knows whether odd perfect numbers exist. Since we don’t know whether there are infinitely many Mersenne primes, we don’t know whether there are infinitely many perfect numbers. But there are at least 50 of them.
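For example, taking the Mersenne prime m = 31 = 2^5 – 1 gives the perfect number 496, which we can verify directly:

    m = 31                # the Mersenne prime 2^5 - 1
    n = m*(m + 1)//2      # 496
    proper_divisors = [d for d in range(1, n) if n % d == 0]
    print(n, sum(proper_divisors))   # both are 496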


The Engineer’s Nyquist frequency and the sampling theorem

The Nyquist sampling theorem says that a band-limited signal can be recovered from evenly-spaced samples. If the highest frequency component of the signal is fc then the function needs to be sampled at a frequency of at least the Nyquist frequency 2fc. Or to put it another way, the spacing between samples needs to be no more than Δ = 1/(2fc).

If the signal is given by a function h(t), then the Nyquist-Shannon sampling theorem says we can recover h(t) by

h(t) = \sum_{n=-\infty}^\infty h(n\Delta)\, \mathrm{sinc}(2f_c t- n)

where sinc(x) = sin(πx) / πx.

In practice, signals may not be entirely band-limited, but beyond some frequency fc higher frequencies can be ignored. This means that the cutoff frequency fc is somewhat fuzzy. As we demonstrate below, it’s much better to err on the side of making the cutoff frequency higher than necessary. Sampling at a little less than the necessary frequency can cause the reconstructed signal to be a poor approximation of the original. That is, the sampling theorem is robust to over-sampling but not to under-sampling. There’s no harm in sampling more frequently than necessary. (No harm as far as the accuracy of the equation above goes. There may be economic costs, for example, that come from using an unnecessarily high sampling rate.)

Let’s look at the function h(t) = cos(18πt) + cos(20πt). The bandwidth of this function is 10 Hz, and so the sampling theorem requires that we sample our function at 20 Hz. If we sample at 20.4 Hz, 2% higher than necessary, the reconstruction lines up with the original function so well that the plots of the two functions agree to the thickness of the plotting line.

Function and reconstruction sampling at 20.4 Hz

But if we sample at 19.6 Hz, 2% less than necessary, the reconstruction is not at all accurate due to problems with aliasing.

Function and reconstruction sampling at 19.6 Hz
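Here is a sketch of how plots like the two above can be produced, truncating the infinite sum in the sampling theorem to a finite number of terms. (This is my own illustration, not the code behind the figures; the number of samples and the plotting window are arbitrary choices.) Note that numpy’s sinc is the normalized sinc, the same convention as above.

    import numpy as np
    import matplotlib.pyplot as plt

    def h(t):
        return np.cos(18*np.pi*t) + np.cos(20*np.pi*t)

    def reconstruct(t, fs, num_samples=400):
        # sample h at spacing 1/fs and apply the sinc interpolation formula
        n = np.arange(-num_samples, num_samples + 1)
        return sum(h(k/fs)*np.sinc(fs*t - k) for k in n)

    t = np.linspace(0, 2, 1000)
    for fs in [20.4, 19.6]:
        plt.plot(t, h(t), label="original")
        plt.plot(t, reconstruct(t, fs), "--", label="reconstruction")
        plt.title(f"Sampling at {fs} Hz")
        plt.legend()
        plt.show()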

One rule of thumb is to use the Engineer’s Nyquist frequency of 2.5 fc which is 25% more than the exact Nyquist frequency. An engineer’s Nyquist frequency is sorta like a baker’s dozen, a conventional safety margin added to a well-known quantity.

Update: Here’s a plot of the error, the RMS difference between the signal and its reconstruction, as a function of sampling frequency.

RMS error in reconstruction as a function of sampling frequency

By the way, the function in the example demonstrates beats. The sum of a 9 Hz signal and a 10 Hz signal is a 9.5 Hz signal modulated at 0.5 Hz. More details on beats in this post on AM radio and musical instruments.
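Explicitly, the sum-to-product identity turns the example function into the product of a slow envelope and a faster carrier:

\cos(18\pi t) + \cos(20\pi t) = 2 \cos(\pi t) \cos(19\pi t)

The second factor oscillates at 9.5 Hz while the first modulates its amplitude at 0.5 Hz.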

Making sense of a probability problem in the WSJ

Wall Street Journal clipping

Someone wrote to me the other day asking if I could explain a probability example from the Wall Street Journal. (“Proving Investment Success Takes Time,” Spencer Jakab, November 25, 2017.)

Victor Haghani … and two colleagues told several hundred acquaintances who worked in finance that they would flip two coins, one that was normal and the other that was weighted so it came up heads 60% of the time. They asked the people how many flips it would take them to figure out, with a 95% confidence level, which one was the 60% coin. Told to give a “quick guess,” nearly a third said fewer than 10 flips, while the median response was 40. The correct answer is 143.

The anecdote is correct in spirit: it takes longer to discover the better of two options than most people suppose. But it’s jarring to read that the answer is precisely 143 when the question hasn’t been stated clearly.

How many flips would it take to figure out which coin is better with a 95% confidence level? For starters, the answer would have to be a distribution, not a single number. You might quickly come to the right conclusion. You might quickly come to the wrong conclusion. You might flip coins for a long time and never come to a conclusion. Maybe there is a way of formulating the problem so that the expected value of the distribution is 143.

How are you to go about flipping the coins? Do you flip both of them, or just flip one coin? For example, you might flip both coins until you are confident that one is better, and conclude that the better one is the one that was designed to come up heads 60% of the time. Or you could just flip one of them and test the hypothesis Prob(heads) = 0.5 versus the alternative Prob(heads) = 0.6. Or maybe you flip one coin two times for every one time you flip the other. Etc.

What do you mean by “95% confidence level”? Is this a frequentist confidence interval? And do you compute the (Bayesian) predictive probability of arriving at such a confidence level? Are you computing the (Bayesian) posterior model probabilities of two models, one in which the first coin has probability of heads 0.5 and the second has probability 0.6 versus the opposite model?

Do you assume that you know a priori that one coin has probability of heads 0.5 and the other 0.6, or do you not assume this and just want to find the coin with higher probability of heads, and evaluate such a model when in fact the probabilities of heads are as stated?

Are you conducting an experiment with a predetermined sample size of 143? Or are you continuously monitoring the data, stopping when you reach your conclusion?

I leave it as an exercise to the reader to implement the various alternatives suggested above and see whether one of them produces 143 as a result. (I did a back-of-the-envelope calculation that suggests there is one.) So the first question is to reverse engineer which problem statement the article was based on. The second question is to decide which problem formulation you believe would be most appropriate in the context of the article.
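To illustrate just one of the possible formulations (my own choice, not necessarily the one behind the article’s 143): flip a single coin that really is the 60% coin, update the posterior probability that it is the biased one starting from even prior odds over the two candidate models, and stop when that posterior reaches 0.95. The number of flips required is then a random variable whose distribution we can look at by simulation. A fuller treatment would also stop if the posterior dropped low enough to conclude, wrongly, that the coin is fair.

    import random
    from math import log, exp

    def flips_to_confidence(p_true=0.6, threshold=0.95):
        log_odds = 0.0                  # log of P(p = 0.6) / P(p = 0.5)
        n = 0
        while True:
            n += 1
            heads = random.random() < p_true
            log_odds += log(0.6/0.5) if heads else log(0.4/0.5)
            if 1/(1 + exp(-log_odds)) >= threshold:
                return n

    samples = sorted(flips_to_confidence() for _ in range(10_000))
    print("median number of flips:", samples[len(samples)//2])
    print("mean number of flips:  ", sum(samples)/len(samples))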

Exponential sum for the new year

Exponential sums can make intricate patterns. Last year I made a page that displays a different image each day, using the month, day, and year as parameters in the expression below. The images plot the partial sums of this sum.

\sum_{n=0}^N \exp\left( 2\pi i \left( \frac{n}{m} + \frac{n^2}{d} + \frac{n^3}{y} \right ) \right )

This was yesterday’s image.

Image of the day, New Year's Eve.

Today’s image is surprisingly plain if we use y = 18.

This is in part because the least common multiple of 1, 1, and 18 is 18. The image could have no more than 18 vertices. In fact, it has only 6 vertices because the summand above has period 6.

But if we use y = 2018 we get something much more complex.

The Exponential Sum of the Day page will use y = 18 this year. There will be a few simple images this year but there will also be lots of surprises.
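Here is a rough sketch of how such images can be drawn. (This isn’t the code behind the Exponential Sum of the Day page; the number of terms below is an arbitrary choice.)

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_exp_sum(m, d, y, N=2000):
        n = np.arange(N + 1)
        terms = np.exp(2j*np.pi*(n/m + n**2/d + n**3/y))
        z = np.cumsum(terms)             # partial sums
        plt.plot(z.real, z.imag)
        plt.gca().set_aspect(1)
        plt.show()

    plot_exp_sum(1, 1, 18)      # January 1 with y = 18: only 6 distinct vertices
    plot_exp_sum(1, 1, 2018)    # y = 2018: much more intricate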

Free technical books, mostly chemical engineering

Retiring professor Leonard Fabiano contacted me looking to give away a set of technical books, mostly chemical engineering books. If you’re interested please email him at lenfab@live.com.

Here are the books:

Chemical engineering books

Click on the image to see a larger version.

Two of the titles can’t be read in the photo. These are

  • Conduction of heat in solids by Carslaw and Jaeger
  • Molecular Thermodynamics of Fluid Phase Equilibria by Prausnitz

Equation for the Eiffel Tower

Robert Banks’s book Towing Icebergs, Falling Dominoes, and Other Adventures in Applied Mathematics describes the Eiffel Tower’s shape as approximately the logarithmic curve

y = -y_* \log\left(\frac{x}{x_0} \right )

where y* and x0 are chosen to match the tower’s dimensions.

Here’s a plot of the curve:

Eiffel tower curve plot

And here’s the code that produced the plot:

    from numpy import log, exp, linspace, vectorize
    import matplotlib.pyplot as plt

    # Taken from "Towing Icebergs, Falling Dominoes,
    # and Other Adventures in Applied Mathematics"
    # by Robert B. Banks

    # Constants given in Banks are in feet. Convert to meters.
    feet_to_meter = 0.0254*12
    ystar  = 201*feet_to_meter
    x0     = 207*feet_to_meter
    height = 984*feet_to_meter

    # Solve for where to cut off the curve to match the height of the tower.
    # -ystar log(xmin/x0) = height
    xmin = x0 * exp(-height/ystar)

    def f(x):
        if -xmin < x < xmin:
            return height
        else:
            return -ystar*log(abs(x/x0))
    curve = vectorize(f)

    x = linspace(-x0, x0, 400)

    plt.plot(x, curve(x))
    plt.xlim(-2*x0, 2*x0)
    plt.xlabel("Meters")
    plt.ylabel("Meters")
    plt.title("Eiffel Tower")

    plt.gca().set_aspect(1)
    plt.savefig("eiffel_tower.svg")

Related post: When length equals area
The St. Louis arch is approximately a catenary, i.e. a hyperbolic cosine.

Hermite polynomials, expected values, and integration

In the previous post, I alluded to using Hermite polynomials in conjunction with higher-order Laplace approximation. In this post I’ll expand on what that means.

Hermite polynomials are orthogonal polynomials over the real line with respect to the weight given by the standard normal distribution. (There are two conventions for defining Hermite polynomials, what Wikipedia calls the physicist convention and the probabilist convention. We’re using the latter here.)

The first few Hermite polynomials are 1, x, x² – 1, x³ – 3x, … . You can find the rest of the Hermite polynomials via the recurrence relation

H_{n+1}(x) = x\,H_n(x) - n\,H_{n-1}(x)

The odd order Hermite polynomials are odd functions, and the standard normal density is an even function, so the integral of their product is zero. In fact, since H_0 = 1 and the Hermite polynomials are orthogonal with respect to the standard normal density, the integral of H_k times the standard normal density is zero for every k ≥ 1. In probability terms, if X is a standard normal random variable, the expected value of H_k(X) is 1 if k = 0 and 0 otherwise. (By contrast, the expected value of the monomial X^m is 0 if m is odd and (m – 1)!!, i.e. m – 1 double factorial, if m is even.)

You could think of the Hermite polynomials as the right basis to use when working with normal probability distributions. Writing a polynomial as a linear combination of Hermite polynomials is a change of basis that makes integration trivial: only the constant term survives.

\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \left(\sum_{k=0}^N a_k\,H_k(x)\right) \exp(-x^2/2) \,dx = a_0

Integrating the same polynomial written in the monomial basis instead requires the normal moments:

\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \left(\sum_{k=0}^N b_k\,x^k\right) \exp(-x^2/2) \,dx = \sum_{k=0}^N b_k \, [k\, \mathrm{even} ] \,(k-1)!!

Here [k even] is the function that returns 1 if k is even and 0 otherwise. This notation was introduced by Kenneth Iverson in the APL programming language and has become moderately common in mathematics.
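Here’s a quick numerical check of the change of basis, using numpy’s probabilists’ Hermite module. The example polynomial is an arbitrary choice of mine.

    import numpy as np
    from numpy.polynomial import hermite_e as He

    p = [3, 2, 5, 0, 1]            # p(x) = 3 + 2x + 5x^2 + x^4
    a = He.poly2herme(p)           # coefficients a_k in the Hermite basis
    print(a[0])                    # E[p(X)] for X standard normal

    # same expectation from the moments: E[1] = 1, E[X^2] = 1, E[X^4] = 3
    print(3*1 + 2*0 + 5*1 + 0 + 1*3)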


Higher-order Laplace approximation

Yesterday’s post presented the most common form of Laplace approximation, the second order version, and mentioned in passing that there are higher order versions. The extension to higher order is not trivial, so this post gives a high level overview of how you’d do it.

The previous post looked at integrating exp( log( g(x) ) ) by expanding log g(x) in a power series centered at its maximum. Without loss of generality we can assume the maximum occurs at 0; otherwise do a change of variables first to make this happen.

The first order term of the series is missing because the derivative of g is zero at its maximum. Taking the terms of the series up to second order gives us something of the form a – bx² where b > 0. Then

exp(a – bx²) = exp(a) exp(-bx²).

The first term on the right is a constant that can be pulled out of the integral, and the second term is a normal distribution density, modulo vertical scaling.

Now suppose we want to use more terms from the power series for log g. Let’s write out the series as

log g(x) = a – bx² + R(x)

where R(x) is the remainder of the series after the quadratic terms. When we exponentiate the series for log g we get

exp(log g(x)) = exp(a) exp(-bx²) exp(R(x)).

We think of this as the expected value of exp(R(X)) where X is a normal random variable with variance 1/(2b), up to some constant we pull out of the integral.

We could approximate R(x) with the polynomial formed by the first few terms, but that’s not enough by itself. We need to approximate the expected value of exp(R(X)), not of R(X). The trick is to create another polynomial and find the expectation of that polynomial applied to X.

We need to find a power series for exp(R(x)) and truncate it after a few terms. We don’t need to analytically find the power series for exp(R(x)) directly before truncating it. We can apply the first few terms of the power series for exp to the first few terms of the power series for R and keep all the terms up to the order we want, say up to order k, giving us a kth order polynomial P(x).

Once we have the polynomial P(x), we can find the expectation of P(X) in terms of normal moments (see this post) or alternatively by writing the polynomial as a sum of Hermite polynomials.
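Here is a rough sketch of the whole recipe, applied to the integrand f(x; m, n) from the Laplace approximation post below with m = n = 5. This is my own illustration, not the author’s code, and the truncation order is an arbitrary choice; sympy does the series bookkeeping, and the final expectation uses the normal moments mentioned above.

    import sympy as sp

    x = sp.symbols('x')
    m, n = 5, 5
    g = m*x - (m + n)*sp.log(1 + sp.exp(x))      # log of the integrand f(x; m, n)
    # The maximum of g is at log(m/n) = 0, so no change of variables is needed.

    order = 6
    s = sp.series(g, x, 0, order + 1).removeO()  # power series of log g up to x^6
    a = s.subs(x, 0)                             # constant term a = g(0)
    b = -sp.diff(g, x, 2).subs(x, 0)/2           # log g(x) is roughly a - b x^2 + R(x)
    R = sp.expand(s - a + b*x**2)                # terms of order three and higher

    # Truncate the power series of exp(R) to get a polynomial P
    P = sp.series(sp.exp(R), x, 0, order + 1).removeO()

    # E[X^k] for X ~ Normal(0, 1/(2b)): zero for odd k, (k-1)!! (1/(2b))^(k/2) for even k
    sigma2 = 1/(2*b)
    coeffs = sp.Poly(P, x).all_coeffs()[::-1]    # coefficient of x^k at index k
    EP = sum(c*sp.factorial2(k - 1)*sigma2**(k//2)
             for k, c in enumerate(coeffs) if k % 2 == 0)

    second_order = sp.exp(a)*sp.sqrt(sp.pi/b)    # the usual Laplace approximation
    higher_order = second_order*EP               # corrected by E[P(X)]
    exact = sp.beta(m, n + 1) + sp.beta(m + 1, n)   # identity from the footnote below

    print(sp.N(second_order), sp.N(higher_order), sp.N(exact))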


Laplace approximation of an integral from Bayesian logistic regression

Define

f(x; m, n) = \left( \frac{e^x}{e^x + 1} \right)^m \left( \frac{1}{e^x + 1} \right)^n

and

I(m, n) = \int_{-\infty}^\infty f(x; m, n) \, dx.

This integral comes up in Bayesian logistic regression with a uniform (improper) prior. We will use this integral to illustrate a simple case of Laplace approximation.

The idea of Laplace approximation is to approximate the integral of a Gaussian-like function by the integral of a (scaled) Gaussian with the same mode and same curvature at the mode. That’s the most common, second order Laplace approximation. More generally, Laplace’s approximation depends on truncating a Taylor series, and truncating after the 2nd order term gives the approximation described above. One can construct higher-order Laplace approximations by keeping more terms of the power series.

Let x0 be the place where f takes on its maximum and let g = log f. Note that g also takes on its maximum at x0 and so its first derivative is zero there. Then the (second order) Laplace approximation has the following derivation.

\begin{eqnarray*} I(m,n) &=& \int_{-\infty}^\infty f(x; m, n) \, dx \\ &=& \int_{-\infty}^\infty \exp(g(x; m, n))\, dx \\ &=& \int_{-\infty}^\infty \exp\left( \sum_{k=0}^\infty \frac{ g^{(k)}(x_0) }{k!} (x - x_0)^k\right) \, dx \\ &\approx& \int_{-\infty}^\infty \exp\left( g(x_0) + \frac{g''(x_0)}{2} (x - x_0)^2 \right) \, dx \\ &=& \exp(g(x_0)) \sqrt{\frac{2\pi}{| g''(x_0)|}} . \end{eqnarray*}


We can show that

\frac{\partial}{\partial x} g(x; m, n) = \frac{m - ne^x}{1 + e^x}

and

\frac{\partial^2}{\partial x^2} g(x; m, n) = -\frac{(m + n)e^x}{(1 + e^x)^2}.

It follows that x0 = log (m/n) and the Laplace approximation for I(m, n) reduces to

 \frac{m^m n^n}{(m+n)^{m+n}} \sqrt{ \frac{2\pi(m+n)}{mn} }.

Now let’s compare the Laplace approximation to the exact value of the integral for a few values of m and n. We would expect the Laplace approximation to be more accurate when the integrand is shaped more like the Gaussian density. This happens when m and n are large and when they are more nearly equal. (The function f is a likelihood, and m+n is the number of data points in the likelihood. More data means the likelihood is more nearly normal. Also, m = n means f is symmetric, which improves the normal approximation.)

We’ll look at (m, n) = (2, 1), (20, 10), (200, 100), and (15, 15). Here’s the plot of f(x, 2, 1) with its normal approximation, scaled vertically so that the heights match.

Laplace approximation plot

Even for small arguments, the Gaussian approximation fits well on the left side, but not so much on the right. For large arguments it would be hard to see the difference between the two curves.
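Here’s a sketch of how a plot like the one above can be made: f(x; 2, 1) next to a Gaussian with the same mode and the same curvature at the mode, and hence the same height. (A sketch of mine, not the original plotting code; the plotting range is arbitrary.)

    import numpy as np
    import matplotlib.pyplot as plt

    m, n = 2, 1
    x0 = np.log(m/n)                                     # mode of f
    curvature = (m + n)*np.exp(x0)/(1 + np.exp(x0))**2   # |g''(x0)| from the formula above

    x = np.linspace(-4, 6, 500)
    f = np.exp(m*x)/(np.exp(x) + 1)**(m + n)
    peak = np.exp(m*x0)/(np.exp(x0) + 1)**(m + n)
    gaussian = peak*np.exp(-curvature*(x - x0)**2/2)

    plt.plot(x, f, label="f(x; 2, 1)")
    plt.plot(x, gaussian, "--", label="Gaussian approximation")
    plt.legend()
    plt.show()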

Here are the results of the Laplace approximation and exact integration.

|-----+-----+---------------+---------------+--------------|
|   m |   n |        approx |         exact |    rel error |
|-----+-----+---------------+---------------+--------------|
|   2 |   1 |    0.45481187 |           0.5 | -0.090376260 |
|  20 |  10 |  4.9442206e-9 | 4.99250870e-9 | -0.009672111 |
|  15 |  15 | 8.5243139e-10 | 8.5956334e-10 | -0.008297178 |
| 200 | 100 | 3.6037801e-84 | 3.6072854e-84 | -0.000971728 |
|-----+-----+---------------+---------------+--------------|

The integrals were computed exactly using Mathematica; the results are rational numbers. [1]

Note that when m and n increase by a factor of 10, the relative error goes down by about a factor of 10. This is in keeping with theory that says the error should be O(1/(m+n)). Also, note that the error for (15, 15) is a little lower than that for (20, 10). The errors are qualitatively just what we expected.

***

[1] I initially used Mathematica to compute the integrals, but then realized that’s not necessary. The change of variables u = exp(x) / (1 + exp(x)) shows that

I(a, b) = B(a, b+1) + B(a+1, b)

where B is the beta function. Incidentally, this shows that if a and b are integers then I(a, b) is rational.
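As a quick check of the table above, here is a sketch that recomputes it numerically, using the beta function identity just given in place of symbolic integration. It relies on scipy’s beta function handling the large arguments in the (200, 100) row without underflow.

    from math import sqrt, pi
    from scipy.special import beta

    def laplace(m, n):
        return m**m * n**n / (m + n)**(m + n) * sqrt(2*pi*(m + n)/(m*n))

    def exact(m, n):
        # I(m, n) = B(m, n+1) + B(m+1, n) by the change of variables above
        return beta(m, n + 1) + beta(m + 1, n)

    for m, n in [(2, 1), (20, 10), (15, 15), (200, 100)]:
        a, e = laplace(m, n), exact(m, n)
        print(f"{m:3d} {n:3d}  {a:.7e}  {e:.7e}  {(a - e)/e:+.6f}")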