Shawna Kennedy left a comment on my previous post on music in odd meters that made something click. She pointed out that in Turkish and Romany music, 9/8 is often divided as 2+2+2+3, unlike the Western triple-triple feel (3+3+3). That style of 9/8 music would be an “odd meter” while other 9/8 music would not. When I read her comment about “1-2, 1-2, 1-2, 1-2-3,” Dave Brubeck’s tune Blue Rondo à la Turk started playing in my head. I love that song. I first heard it over 20 years ago and I still whistle it fairly often. My kids probably recognize the tune even though they haven’t heard the recording.
Now I finally get what “à la Turk” means. It must be a reference to the Turkish rhythm of the 9/8 theme. You can hear a short excerpt of Blue Rondo à la Turk here.
Update: The article on Blue Rondo in Wikipedia says that it was based on Mozart’s Rondo alla Turca. I listened to Mozart’s rondo. It’s a famous tune — you’d probably recognize it — but I didn’t know it by name. I would never have drawn a connection between the Mozart rondo and Brubeck’s rondo. Maybe the Wikipedia article is wrong, or maybe Brubeck’s imagination moved pretty far from his inspiration.
Time signatures in music are written like fractions. The numerator tells how many beats are grouped into each measure. For the vast majority of Western music in every genre — popular, classical, jazz, country, etc. — this numerator is divisible by 2 or 3, but hardly ever by any other prime number. Musicians call exceptional time signatures “odd meters,” though the term is misleading. When they say “odd” they mean “odd numbers other than powers of 3.” For example, musicians would not call 9/8 an odd meter, but they would call 7/8 or 11/8 odd.
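To make that convention concrete, here is a small sketch (the function name is mine) that classifies a numerator by the rule above: a meter is “odd” when its numerator is odd and not a power of 3.

```python
def is_odd_meter(numerator):
    """Classify a time signature's numerator by the convention above:
    "odd" meters have an odd numerator that is not a power of 3."""
    if numerator % 2 == 0:
        return False        # 2/4, 4/4, 6/8, 12/8, etc. are not odd meters
    while numerator % 3 == 0:
        numerator //= 3     # strip factors of 3
    return numerator != 1   # 3 and 9 reduce to 1, so 3/4 and 9/8 are not odd
```

By this rule 5/4, 7/8, and 11/8 come out odd, while 3/4 and 9/8 do not.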
The most popular piece of music written in 5/4 time, by far, is Take Five, composed by Paul Desmond and recorded by the Dave Brubeck Quartet. It sold over a million records in 1961 and continues to be popular 50 years after it was written. Here’s a video of the Brubeck Quartet performing Take Five in 1966.
[Update: video removed]
And here is a mind-bending mash up of Take Five by Radiohead. (Thanks to @explicitmemory for the link.)
Some music written in odd meters sounds like an intellectual exercise rather than a beautiful tune. The music of Dave Brubeck is a notable exception. In addition to Take Five, he composed other popular music in odd meters, such as Unsquare Dance written in 7/4. (Listen to a sample of Unsquare Dance here.) Another song in 5/4 I enjoy is “How Deep the Father’s Love for Us” recorded by Sarah Sadler.
Tradition dictates that each Carnival of Mathematics begin with a riff about the number of the carnival. Since this is the 50th carnival, and the Roman numeral representation of 50 is L, I’ll start with a short riff on a few uses of “L” in math.
In tiling, “L” is for L-polyominoes, pieces that look like the letter “L.”
In analysis, “L” is for Lp spaces, named after Henri Lebesgue. These are spaces of functions whose pth powers are Lebesgue-integrable.
In statistics, “L” is for L-estimators. These are robust estimators formed by taking linear combinations of order statistics. I suppose in this case, “L” is for “linear.” The median is the simplest example of an L-estimator. Another simple L-estimator is John Tukey’s trimean.
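As a concrete illustration of an L-estimator (the function name and data are mine), Tukey’s trimean is just a weighted average of the three quartiles:

```python
import statistics

def trimean(data):
    """Tukey's trimean, a simple L-estimator: the linear combination
    of order statistics (Q1 + 2*median + Q3) / 4."""
    # statistics.quantiles with n=4 returns [Q1, Q2, Q3];
    # the default quantile method is "exclusive"
    q1, q2, q3 = statistics.quantiles(data, n=4)
    return (q1 + 2 * q2 + q3) / 4
```

For the data 1 through 7, the quartiles under the default method are 2, 4, and 6, so the trimean is 4.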
In number theory, “L” is for L-functions. First, there were Dirichlet’s L-functions. Then came, in order of increasing generality and abstraction, Hecke’s L-functions, Artin’s L-functions, and Weil’s L-functions. I don’t know where the “L” came from in this case. I suppose Dirichlet arbitrarily chose it and his successors followed his convention.
And now, on to recent math posts from around the web.
Ξ from 360 presents Hyperbolic Light, explaining why a lamp casts a hyperbolic pattern of light on a wall.
Richard Elwes classifies polyhedra in Passing Platonic Solids. He starts with the most restrictive definition of regularity, which gives the five platonic solids. He then discusses which solids are possible as each of the criteria are removed.
Fëanor from JOST A MON presents Puzzling Math in which he traces the history of the St. Ives problem, going back four thousand years.
Given a square matrix A and a particular sparsity pattern, how can we find if it is possible to factorize A into N square factor matrices (where N is finite) whose sparsity pattern is the specified one?
If such factorization is possible, how can we compute each factor, and how can we find the value of N?
IronPython opens up the world of .NET to Python programmers. It’s not as good yet at opening up the world of Python to .NET programmers.
It is easy to write .NET applications in IronPython. I typed in some sample code within a few minutes of installing IronPython and made a very simple Windows application. But I was also interested in going the other way around. I was hoping to use IronPython to expose Python library functionality (specifically SciPy) to C#. This may be possible, but it’s swimming upstream.
There are two issues. First, calling Python from C# is more complicated than I’d expected. In hindsight it makes sense that it should be easier to call statically-typed languages from dynamically-typed languages than the other way around. I wouldn’t be surprised if IronRuby has an analogous problem. Second, even if you’re only using IronPython, not calling it from another language, there are problems calling some Python modules.
I asked a question about SciPy and IronPython on StackOverflow and got two excellent answers. First, “NXC” explained that modules written in pure Python will work with IronPython, but modules written in C will not work directly.
Anything with components written in C (for example NumPy, which is a component of SciPy) will not work on IronPython as the external language interface works differently. Any C language component will probably not work unless it has been explicitly ported to work with IronPython.
That’s disappointing, but it makes sense.
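Under CPython you can at least get a rough idea of whether a module is pure Python or a compiled extension by looking at where its import spec says it comes from. A heuristic sketch (the function name is mine, and this is not an IronPython-specific check):

```python
import importlib.util

def looks_pure_python(module_name):
    """Heuristic: a module loaded from a .py file is pure Python;
    built-ins and compiled extensions (.so/.pyd) are not."""
    spec = importlib.util.find_spec(module_name)
    if spec is None or spec.origin is None:
        return False
    return spec.origin.endswith(".py")
```

For example, `json` is implemented in Python, while `math` is a built-in C module.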
Second, “wilberforce” pointed out an open source project, Ironclad, that might fill in the gap.
Some of my workmates are working on Ironclad, a project that will make extension modules for CPython work in IronPython. It’s still in development, but parts of numpy, scipy and some other modules already work. You should try it out to see whether the parts of scipy you need are supported.
Clay Shirky gave a thought-provoking presentation “It’s Not Information Overload. It’s Filter Failure.” He argues that information overload is not new. Ever since Gutenberg most people have had access to more information than they could handle. But until recently there were effective mechanisms for filtering this information: social norms, slow communication, etc. The solution is to create new filters. Here are a couple quotes from near the end of his presentation.
We’ve had information overload in some form or another since the 1500’s. What’s changing now is the filters used for most of that 500 year period are breaking. And designing new filters doesn’t mean simply updating the old filters. They’re broken for structural reasons, not surface reasons.
When you feel yourself getting too much information, I think the discipline is not to say to yourself “What’s happened to the information?” but rather “What was I relying on before that stopped functioning?”
Data centers consume 1.5% of the electricity produced in the United States and the percentage is increasing. What can be done to make data centers more energy efficient?
According to Ken Brill, 30% of servers could simply be turned off. These servers no longer do anything useful, but nobody was responsible for decommissioning them. (Along the same lines, Brill noted that while preparing for Y2K, companies discovered that half the software they owned didn’t need to be fixed simply because it wasn’t being used.)
After turning off unused equipment, the next thing to do is use more efficient power supplies. Brill said that these power supplies would pay for themselves quickly but are not commonly used.
Here’s an elegant little theorem applied in statistics but useful more generally. Suppose you have a density function f(x) with one hump. Suppose a and b are two points on opposite sides of the hump with f(a) = f(b). Then [a, b] is the shortest interval with its mass. That is, any other interval of length b–a will have less mass than the interval [a, b]. (Here the “mass” of an interval is just the integral of f(x) over that interval.)
Suppose we want to find the shortest interval that has a given mass k. Start by imagining a horizontal line sitting on top of the graph of f(x).
Now lower this horizontal line so that it intersects the graph in two places.
Draw vertical lines down from these two points of intersection to find their x-coordinates.
In this example, the two x-coordinates are about 1.30 and 5.77. So the interval [1.30, 5.77] is the shortest interval with its mass. In other words, no other interval of length 4.47 can contain more mass than this interval does.
We can find the shortest interval of mass k by lowering this horizontal line until the interval it defines has mass k. The lower the horizontal line, the greater the mass. So for any given mass less than the total mass f(x) assigns, there is a unique height of the horizontal line that defines an interval with that mass.
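The horizontal-line procedure above is easy to carry out numerically. Here is a sketch (names are mine; a standard normal density is used for illustration): bisect on the height of the line until the interval where the density exceeds that height has the desired mass.

```python
import math

def shortest_interval(f, lo, hi, mass, n=20001):
    """Find the shortest interval containing `mass` under a unimodal
    density f by lowering a horizontal line: bisect on the line's
    height until the region where f exceeds it has the right mass."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    fx = [f(x) for x in xs]
    dx = (hi - lo) / (n - 1)

    def interval_at_height(c):
        idx = [i for i, y in enumerate(fx) if y >= c]
        i0, i1 = idx[0], idx[-1]
        # trapezoid-rule mass of the interval [xs[i0], xs[i1]]
        m = sum(fx[i] + fx[i + 1] for i in range(i0, i1)) * dx / 2
        return xs[i0], xs[i1], m

    c_lo, c_hi = 0.0, max(fx)
    for _ in range(60):
        c = (c_lo + c_hi) / 2
        _, _, m = interval_at_height(c)
        if m > mass:
            c_lo = c  # line too low: interval too wide
        else:
            c_hi = c
    a, b, _ = interval_at_height(c_lo)
    return a, b

# Shortest interval holding 95% of the mass of a standard normal
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
a, b = shortest_interval(phi, -10, 10, 0.95)
```

For the standard normal this recovers the familiar interval of roughly [−1.96, 1.96], which for a symmetric density coincides with the equal-tail interval.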
This procedure could be used to find the shortest confidence interval or the shortest Bayesian credible interval. In that case the “mass” is probability, and the task is to find the shortest interval containing a specified probability. The theorem says that the shortest confidence interval or credible interval has equal probability density at each end of the interval.
A proof of this theorem is given in Statistical Inference, chapter 9. Technically, f(x) must be unimodal and positive with finite integral. A homework exercise in the same chapter outlines a simpler proof using the additional assumption that f(x) is continuous.
This weekend I discovered Glenda Watson Hyatt’s blog. Because of cerebral palsy, she can only type with her left thumb, hence her nickname “the left thumb blogger.” She writes well. I can only imagine the patience it takes for her to write her posts.
Here is her presentation on accessibility given at AccessCamp on Saturday.
One of my daughters had the following assignment for science. First you boil a couple leaves of red cabbage and pour off the water. In our case the water was inky blue, but results may vary according to your water chemistry and possibly your cabbage. Next add a little diluted ammonia to the water and the color will change. Add the right amount of vinegar and it will change back to the original color. Add more vinegar and it will turn a new color. The liquid can turn a wide spectrum of colors. (Which colors? You’ll need to do the experiment to find out!)
Now we’ve got a jug of ammonia left over. What can you do with a jug of ammonia? Any practical uses? Fun uses? Any more science demonstrations?
I just ran across a quote from Aristotle that seemed right in line with the quotes from John Tukey I posted the other day.
It is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits.
I think Tukey and Aristotle may have gotten along well.
I believe Tukey said “There is no point in being precise when you don’t know what you’re talking about.” I’m going from memory, and that quote may not be verbatim. (I did a Google search on “john tukey quotes” and came up with maybe 20 pages that have the exact same three quotes from Tukey. I can’t imagine that 20 independent editors came up with the same three quotes. It’s not as if the man only said three memorable lines. I imagine there’s a great deal of copying going on.)
Here are a couple quotes from Tukey that Aristotle may have appreciated.
Finding the question is often more important than finding the answer.
The test of a good procedure is how well it works, not how well it is understood.
I have mixed feelings about the second quote. Sometimes you do have to use things that work well even if you don’t understand why. For example, no one completely understands how anesthesia works. But Tukey was speaking in the context of statistical methods, and there I do see some virtue in using what you understand well even when something you don’t understand appears to work better. Maybe the poorly understood technique only appears to do better on a handful of examples and could fail on your data. But I believe Tukey was referring to techniques that many people have used successfully on a wide variety of problems even though the theoretical foundations haven’t been completely explored.
I’ve just started experimenting with IronPython, Microsoft’s implementation of Python built on .NET. You can download IronPython from CodePlex either as an MSI installer or a zip file of binaries. I installed it from the zip file on one computer and from the MSI on another. I highly recommend the latter.
Installing from the zip file
The CodePlex download page has three files: a zip file of source code, a zip file of binaries, and an MSI installer.
My first thought was that I wasn’t interested in compiling IronPython from source, so I’d just download the bin file since it was smaller. I downloaded it, unzipped it, and copied it over to my C:\bin directory. (I have a habit of installing languages in C:\bin to placate software that assumes paths don’t contain spaces. For example, if you install R in the default C:\Program Files location, some add-ons will break.) The typical command line “hello world” program worked just fine. The example from the readme file on how to pop up a window using WinForms worked fine as well. But my attempt to use a standard library by typing import urllib didn’t work. The standard Python modules are not in the search path by default. The tutorial that comes with IronPython explains how to fix this. I added the following two lines to C:\bin\IronPython-2.0.1\Lib\site.py and then was able to use standard modules like urllib.
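The two lines themselves weren’t preserved here, but the fix described in the IronPython tutorial amounts to appending an existing CPython standard library directory to the module search path, something along these lines (the path reflects my install; yours will differ):

```python
# Reconstructed sketch -- the exact lines weren't preserved in the post.
# The idea: point IronPython at an existing CPython standard library.
import sys
sys.path.append(r"C:\bin\Python25\Lib")
```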
I had a non-ferrous version of Python installed already in C:\bin\Python25, so I just reused those files. The tutorial explains where to get the standard library files if IronPython is the first Python you install.
Installing from the MSI file
On a different computer, I downloaded the MSI file and ran it. This was a much nicer experience. The installer has a check box to run NGen on the .NET code in IronPython. I checked this box assuming it would make IronPython run faster in the future.
The standard modules worked immediately with no configuration on my part. The installer created a sophisticated site.py file that builds the path on start-up. Presumably this site.py file will add new modules to my path as I install things in the future.