I had not seen one of these old saxophones until Carlo Burkhardt sent me photos today of a Pierret Modele 5 Tenor Sax from around 1912.

Here’s a closeup of the octave keys.

And here’s a closeup of the bell where you can see the branding.

Musical notes go around in a circle. After 12 half steps we’re back where we started. What would it sound like if we played intervals that went around this circle at golden angles? I’ll include audio files and the code that produced them below.

A golden interval, moving around the music circle by a golden angle, is a little more than a major third. And so a chord made of golden intervals is like an augmented major chord but stretched a bit.

An augmented major triad divides the musical circle exactly in thirds. For example, C E G#. Each note is four half steps, a major third, from the previous note. In terms of a circle, each interval is 120°. Here’s what these notes sound like in succession and as a chord.

(Download)

If we go around the musical circle in golden angles, we get something like an augmented triad but with slightly bigger intervals. In terms of a circle, each note moves 137.5° from the previous note rather than 120°. Whereas an augmented triad goes around the musical circle at 0°, 120°, and 240°, a golden triad goes around at 0°, 137.5°, and 275°.

A half step corresponds to 30°, so a golden angle corresponds to a little more than 4.5 half steps. If we start on C, the next note is between E and F, and the next is just a little higher than A.

If we keep going up in golden intervals, we do not return to the note we started on, unlike a progression of major thirds. In fact, we never get the same note twice because a golden interval is not a rational part of a circle. Three golden angle rotations amount to 412.5°, i.e. 52.5° more than a circle. In terms of music, going up three golden intervals puts us an octave and almost a whole step higher than we started.
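To double-check the geometry, here’s a quick calculation, pure arithmetic with no audio:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2            # golden ratio, about 1.618

# The golden angle covers 1/phi^2 of a full turn
fraction = 1 / phi**2
golden_angle = 360 * fraction      # about 137.5 degrees

# A half step is 30 degrees, so a golden interval
# is a little more than 4.5 half steps
half_steps = golden_angle / 30

# Three golden rotations overshoot a full circle by about 52.5 degrees,
# i.e. about 1.75 half steps: almost a whole step above the octave
overshoot = 3 * golden_angle - 360
```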

Here’s what a longer progression of golden intervals sounds like. Each note keeps going but decays so you can hear both the sequence of notes and how they sound in harmony. The intention was to create something like playing the notes on a piano with the sustain pedal down.

(Download)

It sounds a little unusual but pleasant, better than I thought it would.

Here’s the Python code that produced the sound files in case you’d like to create your own. You might, for example, experiment by increasing or decreasing the decay rate. Or you might try using richer tones than just pure sine waves.

```python
from scipy.constants import golden
import numpy as np
from scipy.io.wavfile import write

N = 44100 # samples per second

# Inputs:
#   frequency in cycles per second
#   duration in seconds
#   decay = half-life in seconds
def tone(freq, duration, decay=0):
    t = np.arange(0, duration, 1.0/N)
    y = np.sin(freq*2*np.pi*t)
    if decay > 0:
        k = np.log(2) / decay
        y *= np.exp(-k*t)
    return y

# Scale signal to 16-bit integers,
# values between -2^15 and 2^15 - 1.
def to_integer(signal):
    signal /= max(abs(signal))
    return np.int16(signal*(2**15 - 1))

C = 262 # middle C frequency in Hz

# Play notes sequentially then as a chord
def arpeggio(n, interval):
    y = np.zeros((n+1)*N)
    for i in range(n):
        x = tone(C * interval**i, 1)
        y[i*N : (i+1)*N] = x
        y[n*N : (n+1)*N] += x
    return y

# Play notes sequentially with each sustained
def bell(n, interval, decay):
    y = np.zeros(n*N)
    for i in range(n):
        x = tone(C * interval**i, n-i, decay)
        y[i*N:] += x
    return y

major_third = 2**(1/3)
golden_interval = 2**(golden**-2)

write("augmented.wav", N, to_integer(arpeggio(3, major_third)))
write("golden_triad.wav", N, to_integer(arpeggio(3, golden_interval)))
write("bell.wav", N, to_integer(bell(9, golden_interval, 6)))
```

My choir director persuaded me to try anyway, with just a few days before auditions. That wasn’t enough time for me to learn the music with all its strange intervals. But I tried out. I sang the whole thing. As awful as it was, I kept going. It was about as terrible as it could be, just good enough to not be funny. I wanted to walk out, and maybe I should have out of compassion for the judges, but I stuck it out.

I was proud of that audition, not as a musical achievement, but because I powered through something humiliating.

I did better in band than in choir. I made Area in band and tried out for State but didn’t make it. I worked hard for that one and did a fair job, but simply wasn’t good enough.

That turned out well. It was my senior year, and I was debating whether to major in math or music. I’d told myself that if I made State, I’d major in music. I didn’t make State, so I majored in math and took a few music classes for fun. We can never know how alternative paths would have worked out, but it’s hard to imagine that I would have succeeded as a musician. I didn’t have the talent or the temperament for it.

When I was in college I wondered whether I should have done something like acoustical engineering as a sort of compromise between math and music. I could imagine that working out. Years later I got a chance to do some work in acoustics and enjoyed it, but I’m glad I made a career of math. Applied math has given me the chance to work in a lot of different areas (to play in everyone else’s back yard, as John Tukey put it) and I believe it suits me better than music or acoustics would have.

The University of Texas band’s half time show that year was a beautiful tribute to the fallen A&M students.

]]>Amplitude modulation multiplies a carrier signal by

1 + *d* sin(2π *f t*)

where *d* is the modulation depth, *f* is the modulation frequency, and *t* is time.

Here are some examples you can listen to. We use a pure 1000 Hz tone and Gaussian white noise as carriers, and vary modulation depth and frequency continuously over 10 seconds. The modulation depth example varies depth from 0 to 1. Modulation frequency varies from 0 to 120 Hz.
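For the curious, here’s a sketch of how a file like the first one could be generated. This is my reconstruction, not necessarily the code behind the clips: a 1000 Hz tone whose depth ramps from 0 to 1 over 10 seconds, with the modulation frequency fixed at an assumed 4 Hz.

```python
import numpy as np
from scipy.io.wavfile import write

N = 44100                        # samples per second
t = np.arange(10 * N) / N        # 10 seconds of time samples

carrier = np.sin(2 * np.pi * 1000 * t)   # pure 1000 Hz tone
d = t / 10                               # depth ramps linearly from 0 to 1
f_m = 4                                  # modulation frequency (assumed)
signal = (1 + d * np.sin(2 * np.pi * f_m * t)) * carrier

# normalize and scale to 16-bit integers
signal /= np.max(np.abs(signal))
write("depth_sweep.wav", N, np.int16(signal * (2**15 - 1)))
```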

First, here’s a pure tone with increasing modulation depth.

Next we vary the modulation frequency.

Now we switch over to Gaussian white noise, first varying depth.

And finally white noise with varying modulation frequency. This one sounds like a prop-driven airplane taking off.

**Related**: Psychoacoustics consulting

Fluctuation strength reaches its maximum at a modulation frequency of around 4 Hz. For much higher modulation frequencies, one perceives roughness rather than fluctuation. The reference value for one vacil is a 1 kHz tone, fully modulated at 4 Hz, at a sound pressure level of 60 decibels. In other words

(1 + sin(8π*t*)) sin(2000π*t*)

where *t* is time in seconds.

Since the carrier frequency is 250 times greater than the modulation frequency, you can’t see both in the same graph. In this plot, the carrier is solid blue compared to the modulation.

Here’s what the reference for one vacil would sound like:
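If you’d like to synthesize the reference yourself, here’s a minimal sketch, analogous to the asper code later in this post but with a 4 Hz modulation frequency:

```python
import numpy as np
from scipy.io.wavfile import write

N = 48000                       # samples per second
t = np.arange(3 * N) / N        # three seconds of audio

# 1 vacil reference: 1 kHz tone, 100% modulated at 4 Hz
signal = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)

# normalize and scale to 16-bit integers
signal /= np.max(np.abs(signal))
write("one_vacil.wav", N, np.int16(signal * (2**15 - 1)))
```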

**See also**: What is an asper?

(1 + sin(140π*t*)) sin(2000π*t*)

where *t* is time in seconds.

Here’s what that sounds like (if you play this at 60 dB, about the loudness of a typical conversation at one meter):

And here’s the Python code that made the file:

```python
from scipy.io.wavfile import write
from numpy import arange, pi, sin, int16

def f(t, f_c, f_m):
    # t   = time
    # f_c = carrier frequency
    # f_m = modulation frequency
    return (1 + sin(2*pi*f_m*t))*sin(2*f_c*pi*t)

def to_integer(signal):
    # Normalize, then scale to 16-bit integers,
    # values between -2^15 and 2^15 - 1.
    # (The raw signal peaks near 2, so normalize
    # to avoid integer overflow.)
    signal /= max(abs(signal))
    return int16(signal*(2**15 - 1))

N = 48000       # samples per second
x = arange(3*N) # three seconds of audio

# 1 asper corresponds to a 1 kHz tone, 100% modulated at 70 Hz, at 60 dB
data = f(x/N, 1000, 70)
write("one_asper.wav", N, to_integer(data))
```

**See also**: What is a vacil?

This afternoon I was working on a project involving tonal prominence. I stepped away from the computer to think and was interrupted by the sound of a leaf blower. I was annoyed for a second, then I thought “Hey, a leaf blower!” and went out to record it. A leaf blower is a great example of a broad spectrum noise with strong tonal components. Lawn maintenance men think you’re kinda crazy when you say you want to record the noise of their equipment.

The tuner app on my phone identified the sound as an A3, the A below middle C, or 220 Hz. Apparently leaf blowers are tenors.

Here’s a short audio clip:

And here’s what the spectrum looks like. The dashed grey vertical lines are at multiples of 55 Hz.

The peaks are perfectly spaced at multiples of the fundamental frequency of 55 Hz, A1 in scientific pitch notation. This even spacing of peaks is the fingerprint of a definite tone. There’s also a lot of random fluctuation between peaks. That’s the fingerprint of noise. So together we hear both a pitch and noise.

When using the tone-to-noise ratio algorithm from the ECMA-74, only the spike at 110 Hz is prominent. A limitation of that approach is that it only considers single tones, not how well multiple tones line up in a harmonic sequence.



Loudness is the psychological counterpart to sound pressure level. Sound pressure level is a physical quantity; loudness is a psychoacoustic quantity. The former has to do with how a microphone perceives sound, the latter with how a human perceives sound. Sound pressure level in dB and loudness in phon are roughly the same for a pure tone at 1 kHz. But loudness depends on the power spectrum of a sound and not just its sound pressure level. For example, if a sound’s frequency is too high or too low to hear, it’s not loud at all! See my previous post on loudness for more background.

Let’s take the four guitar sounds from the previous post and scale them so that each has a sound pressure level of 65 dB, about the sound level of an office conversation, then rescale so the sound pressure is 90 dB, fairly loud though not as loud as a rock concert. (Because sound perception is so nonlinear, amplifying a sound does not increase the loudness or sharpness of every component equally.)

Here are the audio files from the previous post:

Clean note:

Clean chord:

Distorted note:

Distorted chord:

Here’s the loudness, measured in phons, at both sound pressure levels.

|-----------------------+-------+-------|
| Sound                 | 65 dB | 90 dB |
|-----------------------+-------+-------|
| Clean note            | 70.9  | 94.4  |
| Clean chord           | 71.8  | 95.3  |
| Note with distortion  | 81.2  | 103.7 |
| Chord with distortion | 77.0  | 99.6  |
|-----------------------+-------+-------|

While all four sounds have the same sound pressure level, the undistorted sounds have the lowest loudness. The distorted sounds are louder, especially the single note. Increasing the sound pressure level from 65 dB to 90 dB increases the loudness of each sound by roughly the same amount. This will not be true of sharpness.

Sharpness is related to how much of a sound’s spectrum is in the high end. You can compute sharpness as a particular weighted sum of the specific loudness levels in various bands, typically 1/3-octave bands. The weight function increases rapidly toward the highest frequency bands. For more details, see Psychoacoustics: Facts and Models.

The table below gives sharpness, measured in acum, for the four guitar sounds at 65 dB and 90 dB.

|-----------------------+-------+-------|
| Sound                 | 65 dB | 90 dB |
|-----------------------+-------+-------|
| Clean note            | 0.846 | 0.963 |
| Clean chord           | 0.759 | 0.914 |
| Note with distortion  | 1.855 | 2.000 |
| Chord with distortion | 1.281 | 1.307 |
|-----------------------+-------+-------|

Although a clean chord sounds a little louder than a single clean note, the note is a little sharper. Distortion increases sharpness as it does loudness. The single note with distortion is a little louder than the other sounds, but much sharper than the others.

Notice that increasing the sound pressure level increases the sharpness of the sounds by different amounts. The sharpness of the last sound hardly changes.

The other day I asked on Google+ if someone could make an audio clip for me and Dave Jacoby graciously volunteered. I wanted a simple chord on an electric guitar played with varying levels of distortion. Dave describes the process of making the recording as

Fender Telecaster -> EHX LPB clean boost -> Washburn Soloist Distortion (when engaged) -> Fender Frontman 25R amplifier -> iPhone

Let’s look at the Fourier spectrum at four places in the recording: single note and chord, clean and distorted. These are at 0:02, 0:08, 0:39, and 0:43.

The first note, without distortion, has most of its spectrum concentrated at 220 Hz, the A below middle C.

The same note with distortion has a power spectrum that decays much more slowly, i.e. the sound has more high frequency components.

Here’s the A major chord without distortion. Note that since the threshold of hearing is around 20 dB, most of the noise components are inaudible.

Here’s the same chord with distortion. Notice there’s much more noise in the audible range.

**Update**: See the next post for an analysis of the loudness and sharpness of the audio samples in this post.

Photo via Brian Roberts, CC

Here’s a little online calculator to convert between Hz, Bark, and music notation. You can enter one of the three and it will compute the other two.


Kettledrums (a.k.a. tympani) produce a definite pitch, but in theory they should not. At least the simplest mathematical model of a kettledrum would not have a definite pitch. Of course there are more accurate theories that align with reality.

**Unlike many things that work in theory but not in practice, kettledrums work in practice but not in theory**.

A musical sound has a definite pitch when the first several Fourier components are small integer multiples of the lowest component, the fundamental. A pitch we hear at 100 Hz would have a first overtone at 200 Hz, the second at 300 Hz, etc. It’s the relative strengths of these components that give each instrument its characteristic sound.

An ideal string would make a definite pitch when you pluck it. The features of a real string discarded for theoretical simplicity, such as stiffness, don’t make a huge difference to the tonality of the string.

An ideal circular membrane would vibrate at frequencies that are much closer together than consecutive integer multiples of the fundamental. The first few frequencies would be at 1.594, 2.136, 2.296, 2.653, and 2.918 times the fundamental. Here’s what that would sound like:

(download)

I chose amplitudes of 1, 1/2, 1/3, 1/4, 1/5, and 1/6. This was somewhat arbitrary, but not unrealistic. Including more than the first six Fourier components would make the sound even more muddled.
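Here’s roughly how such a clip could be synthesized. This is a sketch under the stated frequency ratios and amplitudes; the fundamental frequency of 200 Hz is my arbitrary choice, not necessarily what the original clip used.

```python
import numpy as np
from scipy.io.wavfile import write

N = 44100                    # samples per second
t = np.arange(3 * N) / N     # three seconds

fundamental = 200            # Hz, an arbitrary choice for illustration
ratios = [1, 1.594, 2.136, 2.296, 2.653, 2.918]  # ideal membrane modes
amps = [1, 1/2, 1/3, 1/4, 1/5, 1/6]              # amplitudes from the text

# Sum the six inharmonic components
signal = sum(a * np.sin(2 * np.pi * fundamental * r * t)
             for a, r in zip(amps, ratios))

# normalize and scale to 16-bit integers
signal /= np.max(np.abs(signal))
write("membrane.wav", N, np.int16(signal * (2**15 - 1)))
```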

By comparison, here’s what it would sound like with the components at 2x up to 6x the fundamental, using the same amplitudes.

(download)

This isn’t an accurate simulation of tympani sounds, just something simple but more realistic than the vibrations of an ideal membrane.

The real world complications of a kettledrum spread out its Fourier components to make it have a more definite pitch. These include the weight of air on top of the drum, the stiffness of the drum head, the air trapped in the body of the drum, etc.

If you’d like to read more about how kettle drums work, you might start with The Physics of Kettledrums by Thomas Rossing in Scientific American, November 1982.

In an earlier post I wrote about how beats come up in vibrating systems, such as a mass and spring combination or an electric circuit. Here I look at examples from music and radio.

When two musical instruments play nearly the same note, they produce beats. The number of beats per second is the difference in the two frequencies. So if two flutes are playing an A, one playing at 440 Hz and one at 442 Hz, you’ll hear a pitch at 441 Hz that beats two times a second. Here’s a wave file of two pure sine waves at 440 Hz and 442 Hz.

As the players come closer to being in tune, the beats slow down. Sometimes you don’t have two instruments but two strings on the same instrument. Guitarists listen for beats to tell when two strings are playing the same note with the same pitch.

The same principle applies to AM radio. A message is transmitted by multiplying a carrier signal by the content you want to broadcast. The beats are the content. As we’ll see below, in some ways the musical example and the AM radio example are opposites. With tuning, we start with two sources and create beats. With AM radio, we start by creating beats, then see that we’ve created two sources, the sidebands of the signal.

Both examples above relate to the following trig identity:

cos(*a*–*b*) + cos(*a*+*b*) = 2 cos *a* cos *b*

And because we’re looking at time-varying signals, slip in a factor of 2π*t*:

cos(2π(*a*–*b*)*t*) + cos(2π(*a*+*b*)*t*) = 2 cos 2π*at* cos 2π*bt*

In the case of two pure tones, slightly out of tune, let *a* = 441 and *b* = 1. Playing an A 440 and an A 442 at the same time results in an A 441, twice as loud, with the amplitude going up and down like cos 2π*t*, i.e. oscillating two times a second. (Why two times and not just once? One beat for the maximum and one for the minimum of cos 2π*t*.)
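A short sketch of how such a two-tone file can be generated, assuming pure sine waves as in the flute example:

```python
import numpy as np
from scipy.io.wavfile import write

N = 44100                  # samples per second
t = np.arange(5 * N) / N   # five seconds

# The sum of 440 Hz and 442 Hz tones equals a 441 Hz tone whose
# amplitude envelope oscillates like cos(2 pi t), beating twice a second.
y = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 442 * t)

# normalize and scale to 16-bit integers
y /= np.max(np.abs(y))
write("beats.wav", N, np.int16(y * (2**15 - 1)))
```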

It may be hard to hear beats because of complications we’ve glossed over. Musical instruments are never perfectly in phase, but more importantly they’re not pure tones. An oboe, for example, has strong components above the fundamental frequency. I used a flute in this example because although its tone is not simply a sine wave, it’s closer to a sine wave than other instruments, especially when playing higher notes. Also, guitarists often compare the *harmonics* of two strings. These are purer tones and so it’s easier to hear beats between them.

For the case of AM radio, read the equation above from right to left. Let *a* be the frequency of the carrier wave. For example, if you’re broadcasting on AM station 700, this means 700 kHz, so *a* = 700,000. If this station were broadcasting a pure tone at 440 Hz, *b* would be 440. This would produce sidebands at 700,440 Hz and 699,560 Hz.

In practice, however, the carrier is not multiplied by a signal like cos 2π*bt* but by 1 + *m* cos 2π*bt* where |*m*| < 1 to avoid *over-modulation*. Without this extra factor of 1 the signal would be 100% modulated; the envelope of the signal would pinch all the way down to zero. By including the factor of 1 and using a modulation index *m* less than 1, the signal looks more like the image above, with the envelope not pinching all the way down. (Over-modulation occurs when *m* > 1. Instead of the envelope pinching to zero, the upper and lower parts of the envelope cross.)


When you press the octave key on the back of a saxophone with your left thumb, the pitch goes up an octave. Sometimes this causes a key on the neck to open up and sometimes it doesn’t [2]. I knew that much.

I thought that when this key didn’t open, the octaves work like they do on a flute: no mechanical change to the instrument, but a change in the way you play. And to some extent this is right: you can make the pitch go up an octave without using the octave key. However, when the octave key is pressed, there is a second hole that opens up when the more visible one on the neck closes.

According to the podcast [1], the first saxophones had two octave keys to operate with your thumb. You had to choose the correct octave key for the note you’re playing. Modern saxophones work the same as early saxophones except there is only one octave key controlling two octave holes.

* * *

[1] Musical Acoustics from The University of Edinburgh, iTunes U.

[2] On the notes written middle C up to A flat, the octave key raises the little hole I wasn’t aware of. For higher notes the octave key raises the octave hole on the neck.


In **scientific pitch notation**, the C near the threshold of hearing, around 16 Hz, is called C0. The C an octave higher is C1, the next C2, etc. Octaves begin with C; other notes use the octave number of the closest C below.

The lowest note on a piano is A0, a major sixth up from C0. Middle C is C4 because it’s 4 octaves above C0. The highest note on a piano is C8.

A4, the A above middle C, has a frequency of 440 Hz. This is nine half steps above C4, so the pitch of C4 is 440 × 2^{-9/12}. C0 is four octaves lower, so it’s 2^{-4} = 1/16 of the pitch of C4. (Details for this calculation and the one below are given here.)

ForÂ a pitchÂ P, the number of half steps from C0 to P is

*h* = 12 log_{2}(*P* / C0).

Here is a page that will let you convert back and forth between frequency and music notation: Music, Hertz, Barks.

If you’d like code rather than just to do one calculation, see the Python code below. It calculates the number of half steps *h* from C0 up to a pitch, then computes the corresponding pitch notation.

```python
from math import log2, pow

A4 = 440
C0 = A4*pow(2, -4.75)

name = ["C", "C#", "D", "D#", "E", "F",
        "F#", "G", "G#", "A", "A#", "B"]

def pitch(freq):
    h = round(12*log2(freq/C0))
    octave = h // 12
    n = h % 12
    return name[n] + str(octave)
```

The pitch for A4 is its own variable in case you’d like to modify the code for a different tuning. While 440 is common, it used to be lower in the past, and you’ll sometimes see higher values like 444 today.

If you’d like to port this code to a language that doesn’t have a `log2` function, you can use `log(x)/log(2)` for `log2(x)`.

When scientific pitch notation was first introduced, C0 was defined to be exactly 16 Hz, whereas now it works out to around 16.35 Hz. The advantage of the original system is that all C’s have a frequency that is a power of 2, i.e. C*n* has frequency 2^{n+4} Hz. The formula above for the number of half steps a pitch is above C0 simplifies to

*h* = 12 log_{2} *P* – 48.

If C0 has frequency 16 Hz, the A above middle C has frequency 2^{8.75} = 430.54 Hz, a little flat compared to A 440. But using the A 440 standard, C0 = 16 Hz is a convenient and fairly accurate approximation.

This decomposition is unique if you impose the extra requirement that consecutive Fibonacci numbers are not allowed [1]. It’s easy to see that the rule against consecutive Fibonacci numbers is necessary for uniqueness. It’s not as easy to see that the rule is sufficient.

Every Fibonacci number is itself the sum of two consecutive Fibonacci numbers (that’s how they’re defined) so clearly there are at least two ways to write a Fibonacci number as the sum of Fibonacci numbers, either just itself or its two predecessors. In the example above, 8 = 5 + 3 and so you could write 10 as 5 + 3 + 2.

The *n*th Fibonacci number is approximately φ^{n}/√5 where φ = 1.618… is the golden ratio. So you could think of a Fibonacci sum representation for *x* as roughly a base φ representation for √5 *x*.

You can find the Fibonacci representation of a number *x* using a greedy algorithm: Subtract the largest Fibonacci number from *x* that you can, then subtract the largest Fibonacci number you can from the remainder, etc.

Programming exercise: How would you implement a function that finds the largest Fibonacci number less than or equal to its input? Once you have this it’s easy to write a program to find Fibonacci representations.
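If you’d rather not work the exercise yourself, here’s one straightforward solution, a direct transcription of the greedy algorithm (spoiler, so look away if you want to solve it on your own):

```python
def fib_below(x):
    # Largest Fibonacci number <= x, for x >= 1.
    a, b = 1, 2
    while b <= x:
        a, b = b, a + b
    return a

def zeckendorf(x):
    # Fibonacci representation of x via the greedy algorithm:
    # repeatedly subtract the largest Fibonacci number that fits.
    parts = []
    while x > 0:
        f = fib_below(x)
        parts.append(f)
        x -= f
    return parts
```

For example, `zeckendorf(10)` returns `[8, 2]`, matching the decomposition 10 = 8 + 2 discussed above.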

* * *

[1] This is known as Zeckendorf’s theorem, published by E. Zeckendorf in 1972. However, C. G. Lekkerkerker had published the same result 20 years earlier.

The screen shot above comes from a tuner app taken when I was around some electrical equipment. The pitch sometimes registered as A# and sometimes as B, and for good reason. In a previous post I derived the formula for converting frequencies to musical pitches:

*h* = 12 log(*P* / *C*) / log 2.

Here *C* is the pitch of middle C, 261.626 Hz, *P* is the frequency of your tone, and *h* is the number of half steps your tone is above middle C. When we stick *P* = 60 Hz into this formula, we get *h* = -25.49, so our electrical hum is halfway between 25 and 26 half-steps below middle C. That’s between an A# and a B two octaves below middle C.

For 50 Hz hum, *h* = -28.65. That would be between a G and a G#, a little closer to G.
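The two calculations above in code:

```python
from math import log2

C4 = 261.626   # middle C in Hz

def half_steps(P):
    # Number of half steps from middle C to frequency P
    return 12 * log2(P / C4)

print(half_steps(60))   # about -25.49
print(half_steps(50))   # about -28.65
```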

**Update**: So why would the frequency of the sound match the frequency of the electricity? The magnetic fields generated by the current would push and pull parts, driving mechanical vibrations at the same frequency.

**Related**: Acoustics consulting

When you go up a fifth (seven half steps) you add a sharp. For example, the key of C has no sharps or flats, G has one sharp, D has two, etc. Starting from C and adding 30 sharps means going up 30*7 half-steps. Musical notes operate modulo 12 since there are 12 half-steps in an octave. 30*7 is congruent to 6 modulo 12, and six half-steps up from C is F#. So the key with 30 sharps would be the same pitches as F#.

But the key wouldn’t be called F#. It would be D quadruple sharp! I’ll explain below.

Sharps are added in the order F, C, G, D, A, E, B, and the name of the key is a half step higher than the last sharp. For example, the key with three sharps is A, and the notes that are sharp are F#, C#, and G#.

In the key of C#, all seven notes are sharp. Now what happens if we add one more sharp? We start over and start adding more sharps in the same order. F was already sharp, and now it would be double sharp. So the key with eight sharps is G#. Everything is sharp except F, which is double sharp.

In a key with 28 sharps, we’ve cycled through F, C, G, D, A, E, and B four times. Everything is quadruple sharp. To add two more sharps, we sharpen F and C one more time, making them quintuple sharp. The note one half-step higher than C quintuple sharp is D quadruple sharp, which is enharmonic with F#.

You could repeat this exercise with flats. Going up a fourth (five half-steps) adds a flat. Or you could think of a flat as a negative sharp.
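The modular arithmetic for the 30-sharps example is easy to check:

```python
# Going up a fifth (seven half steps) adds a sharp,
# and notes work modulo 12 half steps.
half_steps_up = (30 * 7) % 12   # 210 mod 12 = 6

# Six half steps up from C:
chromatic = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]
print(chromatic[half_steps_up])   # F#
```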


Let *P* be the frequency of some pitch you’re interested in and let *C* = 261.626 be the frequency of middle C. If *h* is the number of half steps from *C* to *P* then

*P* / *C* = 2^{h/12}.

Taking logs,

*h* = 12 log(*P* / *C*) / log 2.

If *P* = 85, then *h* = -19.46. That is, James Earl Jones’ voice is about 19 half-steps below middle C, around the F an octave and a half below middle C.

More details on the derivation above here.

There’s a page to do this calculation for you here. You can type in frequency or pitch and get the other back.

(The page also gives pitch on a Bark scale, something you may not care about but which is important in psychoacoustics.)


DSP_fact is for DSP, digital signal processing: filters, Fourier analysis, convolution, sampling, wavelets, etc.

MusicTheoryTip is for basic music theory with a little bias toward jazz. It’ll tweet about harmony, scales, tuning, notation, etc.

Here’s a full list of my 15 daily tip twitter accounts.

If you’re interested in one of these accounts but don’t use Twitter, you can subscribe to a Twitter account via RSS just as you’d subscribe to a blog.

If you’re using Google Reader to subscribe to RSS feeds, you’ll need to switch to something else by July 1. Here are 18 alternatives.

Source: Radiolab

Before the play started, someone told me that the phrase “bidi-bidi-bum” in “If I Were a Rich Man” is a Yiddish term for prayer. I thought “All day long I’d bidi-bidi-bum” was a way of saying “All day long I’d piddle around.” That completely changes the meaning of that part of the song.

When I got home I did a quick search to see whether what I’d heard was correct. According to Wikipedia,

A repeated phrase throughout the song, “all day long I’d bidi-bidi-bum,” is often misunderstood to refer to Tevye’s desire not to have to work. However, the phrase “bidi-bidi-bum” is a reference to the practice of Jewish prayer, in particular davening.

Unfortunately, Wikipedia adds a footnote saying “citation needed,” so I still have some doubt whether this explanation is correct. I searched a little more, but haven’t found anything more authoritative.

Now I wonder whether there’s any significance to other parts of the song that I thought were just a form of Klezmer scat singing, e.g. “yubba dibby dibby dibby dibby dibby dibby dum.” I assumed those were nonsense syllables, but is there some significance to them?

**Update**: At Jason Fruit’s suggestion in the comments, I asked about this on judaism.stackexchange.com. Isaac Moses replied that the answer is somewhere in between. The specific syllables are not meaningful, but they are intended to be reminiscent of the kind of improvisation a cantor might do in singing a prayer.

2012 is also prime as a base-five number: interpreting the digits 2012 in base five gives 2 × 5^{3} + 5 + 2 = 257, which is prime.

Update: Here’s some Mathematica code to find other bases where 2012 is prime.

```mathematica
f[n_] := 2 n^3 + n + 2

For[n = 3, n < 100, n++,
    If[PrimeQ[f[n]], Print[n]]]
```
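Here’s the same search in Python, in case you don’t have Mathematica handy, using a simple trial-division primality test:

```python
def is_prime(m):
    # Trial division: fine for numbers this small
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def f(n):
    # The digits 2012 interpreted in base n
    return 2*n**3 + n + 2

# Bases from 3 to 99 in which 2012 is prime
bases = [n for n in range(3, 100) if is_prime(f(n))]
```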


Kenny G’s philosophy is as shallow as his music.

I just play for myself, the way I want to play, and it comes out sounding like me.

Coltrane’s philosophy, like his music, is more ambitious.

Overall, I think the main thing a musician would like to do is give a picture to the listener of the many wonderful things he knows and senses in the universe. That’s what music is to me: it’s just another way of saying this is a big, wonderful universe we live in, that’s been given to us, and here’s an example of just how magnificent and encompassing it is. That’s what I would like to do. I think that’s one of the greatest things you can do in life, and we all try to do it in some way. The musician’s is through his music.

As Groothuis comments, Kenny G only spoke of expressing *himself*, while Coltrane “expressed a yearning to represent objective realities musically.”

After one utterly extraordinary rendition of “A Love Supreme,” Coltrane stepped off the stage, put down his saxophone, and said simply “Nunc dimittis.” … Coltrane felt he could never play the piece more perfectly. If his whole life had been lived for that passionate thirty-two minute jazz prayer, it would have been worth it. He was ready to go.

*Nunc dimittis* is Latin for “Now dismiss.” These are the opening words of the Vulgate translation of the Song of Simeon, Luke 2:29–32. Simeon says he is ready to die because he has seen what he was waiting for, the promised Messiah.

Lord, now lettest thou thy servant depart in peace, according to thy word:

For mine eyes have seen thy salvation,

Which thou hast prepared before the face of all people;

A light to lighten the Gentiles, and the glory of thy people Israel.

Coltrane’s story brings several things to mind. First, it is awe-inspiring to imagine an accomplishment so fulfilling that you would say “That was it. I’m ready to die.”

Next, it’s interesting to ponder Coltrane’s eclectic spirituality. I knew Christianity was part of his spiritual gumbo, but I was surprised to hear that he made a spontaneous reference to Latin liturgy.

Coltrane was canonized by the African Orthodox Church in 1982. Truth is stranger than fiction.

Finally, I was interested in the name *Nunc dimittis* itself. I hadn’t heard it before. (I’ve only been part of non-liturgical churches.) I thought the name might only be familiar to Catholics, being a Latin term. But an Episcopalian friend informed me that the Anglican mass preserves many Latin titles even though the liturgy itself is in English. I suppose Coltrane encountered this Anglican name via the Episcopalian influence on the African Methodist Episcopal Zion Church.


**Less related posts**:

Software sins of omission (Software and the Book of Common Prayer)

Doing good work with bad tools (Charlie Parker story)

Dave Brubeck mass (Mass composed by a jazz icon)

See also March in 7/4 time and Blue Rondo à la Turk.

However, Sivers didn’t go through his entire education this way. He finished his degree in 2.5 years, but at the rate he started he could have finished in under a semester. Obviously he wasn’t able to blow through everything as fast as music theory.

Some classes compress better than others; theoretical classes condense especially well. A highly motivated student could learn a semester of music theory or physics in a short amount of time. But it would take longer to learn a semester of French or biology no matter how motivated you are, because these courses can’t be summarized by a small number of general principles. And while Sivers learned basic music theory in three hours, he says it took him 15 years to learn how to sing.

Did Sivers’ mentor expose him to everything students taking music theory classes are exposed to? Probably not. But apparently Sivers did learn the most important material, both in the opinion of his mentor and in the opinion of the people who created the placement exams. His mentor not only taught him a lot of ideas in a short amount of time, he also told him when it was time to move on to something else.

It’s hard to say when you’ve learned something. Any subject can be explored in infinite detail. But there comes a point when you’ve learned a subject well enough. Maybe you’ve learned it to your personal satisfaction or you’ve learned it well enough for an exam. Maybe you’ve reached diminishing return on your efforts or you’ve learned as much as you need to for now.

One way to greatly speed up learning is to realize when you’ve learned enough. A mentor can say something like “You don’t know everything, but you’ve learned about as much as you’re going to until you get more experience.”

Occasionally I’ll go from feeling I don’t understand something to feeling I do understand it in a moment, and not because I’ve learned anything new. I just realize that maybe I *do* understand it after all. It’s a feeling like eating a meal quickly and stopping before you feel full. A few minutes later you feel full, not because you’ve eaten any more, but only because your body realizes you’re full.

**Related posts**:

**Odd meters**

Music in 5/4 time

Blue Rondo à la Turk

March in 7/4 time

**Music and computers**

Typesetting music in LaTeX with LilyPond

Windows XP and Ubuntu start-up music

**Music and math**

Opening chord of “A Hard Day’s Night”

Circle of fifths and number theory

Circle of fifths and roots of two

Logarithms, music, and arsenic

Calendars, Connections, and Cats