All day long I’d bidi-bidi-bum

This evening I watched my daughter in Fiddler on the Roof. I thought I knew the play pretty well, but I learned something tonight.

Before the play started, someone told me that the phrase “bidi-bidi-bum” in “If I Were a Rich Man” is a Yiddish term for prayer. I had thought “All day long I’d bidi-bidi-bum” was a way of saying “All day long I’d piddle around.” If the phrase refers to prayer, that completely changes the meaning of that part of the song.

When I got home I did a quick search to see whether what I’d heard was correct. According to Wikipedia,

A repeated phrase throughout the song, “all day long I’d bidi-bidi-bum,” is often misunderstood to refer to Tevye’s desire not to have to work. However, the phrase “bidi-bidi-bum” is a reference to the practice of Jewish prayer, in particular davening.

Unfortunately, Wikipedia adds a footnote saying “citation needed,” so I still have some doubt whether this explanation is correct. I searched a little more, but haven’t found anything more authoritative.

Now I wonder whether there’s any significance to other parts of the song that I thought were just a form of Klezmer scat singing, e.g. “yubba dibby dibby dibby dibby dibby dibby dum.” I assumed those were nonsense syllables, but is there some significance to them?

Update: At Jason Fruit’s suggestion in the comments, I asked about this on judaism.stackexchange.com. Isaac Moses replied that the answer is somewhere in between. The specific syllables are not meaningful, but they are intended to be reminiscent of the kind of improvisation a cantor might do in singing a prayer.

Ceiling of Complexity

Dan Sullivan coined an interesting term: The Ceiling of Complexity™. (Sullivan has a habit of trademarking™ everything™ he™ says™. I dislike the gratuitous trademarking, but I like the phrase “ceiling of complexity.”)

The idea behind ceiling of complexity is that every project you complete creates residual responsibilities and expectations. This residual may be small, maybe not even noticeable, but it’s always there. Over time, this residue builds up and adds complexity. Eventually it forms a ceiling and limits further progress until you do something to break through the ceiling and reach a new state of simplicity. The ceiling of complexity is a byproduct of success.

Sullivan’s picture of a ceiling of complexity is a sort of existential crisis, something an individual would only face a few times over a career, but I find the term useful for less dramatic situations as well. It gives a way to talk about the gradual accumulation of small responsibilities that become significant in aggregate.

The idea of a ceiling of complexity can be applied to projects as well as to careers. For example, the entropy of a software code base increases over time. Successful projects may have a faster increase in entropy. The software has to maintain backward compatibility because many people depend on its features. Sometimes even its bugs have to be preserved because people rely on them. It’s much easier to renovate software that nobody uses.

Related posts

Einstein on radio

From Albert Einstein’s address to the Seventh German Radio Exhibition at Berlin (1930):

One ought to be ashamed to make use of the wonders of science embodied in a radio set, the while appreciating them as little as a cow appreciates the botanic marvels in the plants she munches.

Source: The Science of Radio by Paul Nahin, first edition

Perl One-Liners Explained

Peteris Krumins has a new book, Perl One-Liners Explained. It’s in the same style as his previous books on awk and sed, reviewed here and here.

All the books in this series are organized by task. For each task, there is a one-line solution followed by detailed commentary. The explanations frequently offer alternate solutions with varying degrees of concision and clarity. Sections are seldom more than one page long, so the books are easy to read a little at a time.

Programmers who have written a lot of Perl may still learn a few things from Krumins. In particular, those who have primarily written Perl in script files may not be familiar with some of the tricks for writing succinct Perl on the command line.

Other Perl posts

Limiting your options leads to better options


… when you study the evidence, it’s clear that you’re not likely to encounter real interesting opportunities in your life until after you’re really good at something.

If you avoid focus because you want to keep your options open, you’re likely accomplishing the opposite. Getting good is a prerequisite to encountering options worth pursuing.

From Closing your interests opens more interesting opportunities.

Related post: Demonstrating persistence

How to compute jinc(x)

The function jinc(x) that I wrote about yesterday is almost trivial to implement, but not quite. I’ll explain why it’s not quite as easy as it looks and how one might implement it in C and Python.

The function jinc(x) is defined as J₁(x) / x, so if you have code to compute J₁ then it ought to be a no-brainer. For example, why not use the following C code?

    #include <math.h>
    double jinc(double x) {
        return j1(x) / x;
    }

The problem is that if you pass in 0, the code will divide by 0 and return a NaN. The function jinc(x) is defined to be 1/2 at x = 0 because that’s the limit of J₁(x) / x as x goes to 0. So we try again:

    #include <math.h>
    double jinc(double x) {
        return (x == 0.0) ? 0.5 : j1(x) / x;
    }

Does that work? Technically, it could still fail — we’ll come back to that at the end — but we’ll assume for now that it’s OK.

We could write the analogous Python code, and it would be adequate as long as we’re only calling the function with scalars and not NumPy arrays.

    from scipy.special import j1
    def jinc(x):
        if x == 0.0:
            return 0.5
        return j1(x) / x

Now suppose you want to plot this function. You create an array of points, say

    x = np.linspace(-1, 1, 25)

and plot jinc(x). You’ll get an error: “ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all().” Incidentally, if we had called linspace with an even integer as the last argument, our array of points would avoid zero, and the naive implementation j1(x) / x would work.

When Python tries to apply jinc to an array, it doesn’t know how to interpret the test x == 0. The error message asks, in effect, “Do you mean if any component of x is 0? Or if all components of x are 0?” Neither option is what we want. We want to apply jinc as written to each element of x. We can do this by calling NumPy’s vectorize function.

    jinc = np.vectorize(jinc)

This replaces our original jinc function with one that handles NumPy arrays correctly.
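For example, the following sketch of mine (not from the original post; it assumes matplotlib is available) plots jinc on the grid above without error.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.special import j1

    def jinc(x):
        if x == 0.0:
            return 0.5
        return j1(x) / x

    jinc = np.vectorize(jinc)  # apply the scalar test element by element

    x = np.linspace(-1, 1, 25)
    plt.plot(x, jinc(x))  # no ValueError now
    plt.show()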

There is an extremely unlikely scenario in which the code above could fail. The value of J₁(x) is approximately x/2 for small values of x. If the floating point value x is so small that 0.5*x underflows to 0, our function will return 0, even though it should return 0.5. The C code above works for values of x as small as DBL_MIN, and even for values much smaller. (DBL_MIN is not the smallest positive value of a double, only the smallest positive normalized double.) But if you set

    x = DBL_MIN / pow(2.0, 52);  /* smallest positive subnormal double */

then jinc(x) will return 0. If you want to be absolutely safe, you could change the implementation to

    #include <math.h>
    double jinc(double x) {
        return (fabs(x) < 1e-8) ? 0.5 : j1(x) / x;
    }

Why test whether the absolute value is less than 10⁻⁸ rather than some much smaller number? For small x, the error in approximating jinc(x) with 1/2 is on the order of x²/16. So for x as large as 10⁻⁸, the approximation error is below the resolution of a double. As a bonus, the function jinc(x) will be more efficient for |x| < 10⁻⁸ since it avoids a call to j1.
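Here’s a sketch of my own, not from the original post, carrying the same threshold idea over to Python. Using np.where makes it work on NumPy arrays without vectorize, though it gives up the efficiency bonus since j1 is evaluated everywhere.

    import numpy as np
    from scipy.special import j1

    def jinc(x):
        x = np.asarray(x, dtype=float)
        small = np.abs(x) < 1e-8
        # Use a dummy denominator of 1.0 where x is tiny; those entries
        # are overwritten with the limiting value 0.5 by np.where.
        denom = np.where(small, 1.0, x)
        return np.where(small, 0.5, j1(denom) / denom)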

Related posts

Jinc function

This afternoon I ran across the jinc function for the first time.

The sinc function is defined by

\mbox{sinc}(t) = \frac{\sin(t)}{t}

The jinc function is defined analogously by

\mbox{jinc}(t) = \frac{J_1(t)}{t}

where J₁ is a Bessel function. Bessel functions are analogous to sines, so the jinc function is analogous to the sinc function.

Here’s what the sinc and jinc functions look like.

The jinc function is not as common as the sinc function. For example, both Mathematica and SciPy have built-in functions for sinc but not for jinc. [There are actually two definitions of sinc. Mathematica uses the definition above, but SciPy uses sin(πt)/πt. The SciPy convention is more common in digital signal processing.]
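Here’s a small illustration of my own of the difference between the two conventions, using NumPy’s sinc, which follows the normalized definition:

    import numpy as np

    t = 0.5
    print(np.sin(t) / t)  # unnormalized sinc: sin(t)/t, about 0.9589
    print(np.sinc(t))     # normalized sinc: sin(pi t)/(pi t) = 2/pi, about 0.6366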

As I write this, Wikipedia has an entry for sinc but not for jinc. Someone want to write one?

For small t, jinc(t) is approximately cos(t/2) / 2. This approximation has error O(t⁴), so it’s very good for small t, useless for large t.
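Here’s a quick numerical check of my own, not from the original post:

    import numpy as np
    from scipy.special import j1

    t = 0.1
    exact = j1(t) / t           # jinc(t)
    approx = np.cos(t / 2) / 2
    print(exact - approx)       # about t**4/768, i.e. roughly 1.3e-7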

For large values of t, jinc(t) is like a damped, shifted cosine. Specifically,

\mbox{jinc}(t) \sim \cos\left( |t| - \frac{3\pi}{4}\right) \sqrt{\frac{2}{\pi |t|^3}}

with an error that decreases like O(|t|⁻²).

Like the sinc function, the jinc function has a simple Fourier transform. Both transforms are zero outside the interval [-1, 1]. Inside this interval, the transform of sinc is a constant, √(π/2). On the same interval, the transform of jinc is √(2/π) √(1 − ω²).
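As a sanity check of my own (not from the original post), the following verifies the jinc transform at one frequency, assuming the unitary convention with a factor of 1/√(2π) in front of the integral. The test frequency 0.3 and the truncation point 200 are arbitrary choices, good enough for a few digits.

    import numpy as np
    from scipy.special import j1
    from scipy.integrate import quad

    def jinc(x):
        return 0.5 if x == 0.0 else j1(x) / x

    w = 0.3  # a frequency inside (-1, 1)
    # jinc is even, so its Fourier transform reduces to a cosine integral.
    integral, _ = quad(jinc, 0, 200, weight='cos', wvar=w)
    numeric = 2 * integral / np.sqrt(2 * np.pi)
    exact = np.sqrt(2 / np.pi) * np.sqrt(1 - w**2)
    print(numeric, exact)  # both roughly 0.761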

More Bessel function posts

The universal solvent of statistics

Andrew Gelman just posted an interesting article on the philosophy of Bayesian statistics. Here’s my favorite passage.

This reminds me of a standard question that Don Rubin … asks in virtually any situation: “What would you do if you had all the data?” For me, that “what would you do” question is one of the universal solvents of statistics.

Emphasis added.

I had not heard Don Rubin’s question before, but I think I’ll be asking it often. It reminds me of Alice’s famous dialog with the Cheshire Cat:

“Would you tell me, please, which way I ought to go from here?”

“That depends a good deal on where you want to get to,” said the Cat.

“I don’t much care where–” said Alice.

“Then it doesn’t matter which way you go,” said the Cat.


Related post: Irrelevant uncertainty

Feynman on imagining electromagnetic waves

Richard Feynman on imagining electromagnetic waves:

I’ll tell you what I see. I see some kind of vague, shadowy, wiggling lines — here and there an E and a B written on them somehow, and perhaps some of the lines have arrows on them — an arrow here or there which disappears when I look too closely at it. When I talk about the fields swishing through space, I have a terrible confusion between the symbols I use to describe the objects and the objects themselves. I cannot really make a picture that is even nearly like the true waves. So if you have difficulty making such a picture, you should not be worried that your difficulty is unusual.

From The Feynman Lectures on Physics, volume II.

Other Feynman posts