Alphamagic squares in French

In earlier blog posts I gave examples of alphamagic squares in English and in Spanish. This post looks at French.


The Wikipedia article on alphamagic squares, quoting The Universal Book of Mathematics, says

French allows just one 3 × 3 alphamagic square involving numbers up to 200, but a further 255 squares if the size of the entries is increased to 300.

My script did not find a French alphamagic square with entries no larger than 200, but it did find 254 squares with entries no larger than 300. (For each of these squares there are seven more variations you can form by rotation and reflection.)

The French alphamagic square with the smallest maximum entry that I found is the following.

[[3, 204, 102], [202, 103, 4], [104, 2, 203]]

When spelled out in French we get

[[trois, deux cent quatre, cent deux], [deux cent deux, cent trois, quatre], [cent quatre, deux, deux cent trois]]

And when we replace each cell with its number of letters we get the following magic square:

[[5, 14, 8], [12, 9, 6], [10, 4, 13]]

A list of all the French magic squares with maximum element 300 that I found is available here.
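
Here's a minimal sketch in Python of the kind of check involved. It is not my original script; it leans on the num2words package for the spellings, and I'm assuming its French conventions match the ones used above.

from num2words import num2words

def letter_count(n, lang):
    # Count letters only, ignoring spaces and hyphens.
    return sum(c.isalpha() for c in num2words(n, lang=lang))

def is_magic(sq):
    sums = [sum(row) for row in sq]                # row sums
    sums += [sum(col) for col in zip(*sq)]         # column sums
    sums += [sum(sq[i][i] for i in range(3)),      # main diagonal
             sum(sq[i][2 - i] for i in range(3))]  # anti-diagonal
    return len(set(sums)) == 1

def is_alphamagic(sq, lang):
    counts = [[letter_count(n, lang) for n in row] for row in sq]
    return is_magic(sq) and is_magic(counts)

print(is_alphamagic([[3, 204, 102], [202, 103, 4], [104, 2, 203]], "fr"))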

Alphamagic square in Spanish

In a previous post I gave an example of an alphamagic square in English. This is a magic square such that if you replace each number with the number of letters in its name when spelled out, you get another magic square.


I wondered whether I could find an alphamagic square in Spanish, so I wrote a script to look. I found two. (Plus seven more variations of each, formed by rotation and reflection.)

The first is the following:

[[93, 155, 121], [151, 123, 95], [125, 91, 153]]

When spelled out in Spanish the numbers are:

[[noventa y tres, ciento cincuenta y cinco, ciento veintiuno], [ciento cincuenta y uno, ciento veintitrés, noventa y cinco], [ciento veinticinco, noventa y uno, ciento cincuenta y tres]]

And the number of letters in each cell gives:

[[12, 21, 15], [19, 16, 13], [17, 11, 20]]

Here’s a second example:

[[95, 156, 124], [154, 125, 96], [126, 94, 155]]

Spelled out in Spanish:

[[noventa y cinco, ciento cincuenta y seis, ciento veinticuatro], [ciento cincuenta y cuatro, ciento veinticinco, noventa y seis], [ciento veintiséis, noventa y cuatro, ciento cincuenta y cinco]]

And the letter counts give:

[[13, 20, 18], [22, 17, 12], [16, 14, 21]]

If my script is correct, these are the only examples (besides their rotations and reflections) for numbers between 1 and 200.

Next: Alphamagic squares in French

Roughness of amplitude modulated tones

A recent post pointed out that two pure tones that are fairly close in pitch create a rough sound. The roughness increases with the frequency difference, up to a point, then decreases.

This post looks at roughness in a different setting: amplitude modulation. Several psychoacoustics researchers have suggested that perceived roughness increases as a power of modulation depth, up to a maximum. That is,

R \sim m^p

where the signal is

[1 + m\cos(2\pi f_m t)] \cos(2\pi f_c t)

Some have suggested, based on empirical studies, that p = 2, while others have suggested that p varies as a function of the frequency fc of the carrier wave.

Here is an audio (.wav) file where the modulation depth varies as a function of time, m = 0.1t, where t is time in seconds.

In this example the carrier frequency fc is 1000 Hz and the modulation frequency fm is 60 Hz.
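
Here's a sketch of how such a file could be generated, in the style of the code in the acoustic roughness post below. The ten-second duration and file name are my assumptions, not necessarily how the file above was made.

from scipy.io.wavfile import write
from numpy import arange, pi, cos, int16

N = 48000            # samples per second
t = arange(10*N)/N   # 10 seconds of audio, so m sweeps from 0 to 1
f_c, f_m = 1000, 60  # carrier and modulation frequencies in Hz
m = 0.1*t            # modulation depth grows linearly with time

signal = (1 + m*cos(2*pi*f_m*t)) * cos(2*pi*f_c*t)
write("am_roughness.wav", N, int16(signal*(2**15 - 1)/max(abs(signal))))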

Reference: P. Daniel and R. Weber. Psychoacoustical Roughness: Implementation of an Optimized Model. Acustica 83 (1997) 113–123.

Acoustic roughness

When two pure tones are nearly in tune, you hear beats. The perceived pitch is the average of the two pitches, and you hear it fluctuate as many times per second as the difference in frequencies. For example, an A 438 and an A 442 together sound like an A 440 that beats four times per second. (Listen)

As the difference in pitches increases, the combined tone sounds rough and unpleasant. Here are sound files combining two pitches that differ by 16 Hz and 30 Hz.

16 Hz:

30 Hz:

The sound becomes more pleasant as the tones differ more in pitch. Here’s an example of pitches differing by 100 Hz. Now instead of hearing one rough tone, we hear two distinct tones in harmony. The two notes are at frequencies 440-50 Hz and 440+50 Hz, approximately the G and B above middle C.

100 Hz:

If we separate the tones even further, we hear one tone again. Here we separate the tones by 300 Hz. Now instead of hearing harmony, we hear only the lower tone, 440 − 150 Hz. The upper tone, 440 + 150 Hz, changes the quality of the lower tone but is barely perceived directly.

300 Hz:

We can make the previous example sound a little better by making the separation a little smaller, 293 Hz. Why? Because now the two tones are an octave apart rather than a little more than an octave. Now we hear the D above middle C.

293 Hz:

Update: Here’s a continuous version of the above examples. The separation of the two pitches at time t is 10t Hz.

Continuous:

Here’s Python code that produced the .wav files. (I’m using Python 3.5.1. There was a comment on an earlier post from someone having trouble using similar code from Python 2.7. In Python 2 the expression x/N below does integer division, which reduces the signal to samples at whole-second times; use x/float(N) there.)

from scipy.io.wavfile import write
from numpy import arange, pi, sin, int16, iinfo

N = 48000 # sampling rate per second
x = arange(3*N) # 3 seconds of audio

def beats(t, f1, f2):
    return sin(2*pi*f1*t) + sin(2*pi*f2*t)

def to_integer(signal):
    # Take samples in [-1, 1] and scale to 16-bit integers
    m = iinfo(int16).max
    M = max(abs(signal))
    return int16(signal*m/M)

def write_beat_file(center_freq, delta):
    f1 = center_freq - 0.5*delta
    f2 = center_freq + 0.5*delta    
    file_name = "beats_{}Hz_diff.wav".format(delta)
    write(file_name, N, to_integer(beats(x/N, f1, f2)))

write_beat_file(440, 4)
write_beat_file(440, 16)
write_beat_file(440, 30)
write_beat_file(440, 100)
write_beat_file(440, 293)
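
And here's a sketch of how the continuous version could be made. This is my reconstruction, not necessarily how the file above was produced. Since the separation 10t changes with time, the instantaneous frequencies 440 ± 5t Hz have to be integrated to get the phase.

def continuous_beats(t):
    # Instantaneous frequencies 440 -/+ 5t Hz integrate to
    # phases 2*pi*(440*t -/+ 2.5*t**2).
    return sin(2*pi*(440*t - 2.5*t**2)) + sin(2*pi*(440*t + 2.5*t**2))

x30 = arange(30*N) # 30 seconds sweeps the separation from 0 to 300 Hz
write("beats_continuous.wav", N, to_integer(continuous_beats(x30/N)))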

In my next post on roughness I get a little more quantitative, giving a power law for roughness of an amplitude modulated signal.

Bayesian adaptive clinical trials: promise and pitfalls

This afternoon I’m giving a talk at the Houston INFORMS chapter entitled “Bayesian adaptive clinical trials: promise and pitfalls.”

When I started working in adaptive clinical trials, I was very excited about the potential of such methods. The clinical trial methods most commonly used are very crude, and there’s plenty of room for improvement.

Over time I became concerned about overly complex methods, methods which were good for academic publication but may not be best for patients. Such methods are extremely time-consuming to develop and may not perform as well in practice as simpler methods.

There’s a great deal of opportunity between the extremes, methods that are more sophisticated than the status quo without being unnecessarily complex.

Paying for doughnuts with a phone

At a doughnut shop today, I noticed the people at the head of each line were using their phones, either to pay for an order or to use a coupon. I thought how ridiculous it would sound if I were to go back twenty or thirty years and tell my mother about this.

Me: Some day people are going to pay for doughnuts with a phone.

Mom: You mean like calling up a doughnut shop to place an order? We already do that.

Me: No, they’re going to take their phone into the doughnut shop and pay with it.

Mom: Good grief. Why not just use cash?

Me: Well, they could. But it’ll be easy to use their phones since most people will carry them around all the time anyway.

Mom: People will carry around phones?!

Me: Sorta. More like computers, that can also make phone calls.

Mom: People will carry around computers?!!


Me: Not really. I was just making that up. But they will drive flying cars.

Mom: OK, I could see that. That’ll be nice.

Cornu’s spiral

Cornu’s spiral is the curve parameterized by

x(t) = C(t) = \int_0^t \cos\left(\frac{\pi}{2} s^2\right) \, ds \\ y(t) = S(t) = \int_0^t \sin\left(\frac{\pi}{2} s^2\right) \, ds

where C and S are the Fresnel functions, sometimes called the Fresnel cosine integral and Fresnel sine integral. Here’s a plot of the spiral.

Cornu's spiral

Both Fresnel functions approach ½ as t → ∞ and so the curve slowly spirals toward (½, ½) in the first quadrant. And by symmetry, because both functions are odd, the curve spirals toward (-½, -½) in the third quadrant.

Here’s the Python code used to make the plot.

    from scipy.special import fresnel
    from numpy import linspace
    import matplotlib.pyplot as plt

    t = linspace(-7, 7, 1000)
    y, x = fresnel(t)   # fresnel returns (S, C); swap to get (x, y) = (C, S)

    plt.plot(x, y)
    plt.gca().set_aspect("equal")
    plt.show()

The SciPy function fresnel returns both Fresnel functions at the same time. It returns them in the order (S, C) so the code reverses the order of these to match the Cornu curve.

One interesting feature of Cornu’s spiral is that its curvature increases linearly with time. This is easy to verify: because of the fundamental theorem of calculus, the Fresnel functions reduce to sines and cosines when you take derivatives, and you can show that the curvature at time t equals πt.
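
In more detail, the fundamental theorem of calculus gives x′(t) = cos(πt²/2) and y′(t) = sin(πt²/2), so the curve has unit speed, and the signed curvature is

\kappa(t) = x'y'' - y'x'' = \pi t \cos^2\left(\frac{\pi}{2} t^2\right) + \pi t \sin^2\left(\frac{\pi}{2} t^2\right) = \pi t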

How fast does the curve spiral toward (½, ½)? Since the curvature at time t is πt, at time t the curve is instantaneously bending like a circle of radius 1/(πt). So the radius of the spiral decreases like 1/(πt).

Cornu’s spiral was actually discovered by Euler. Cornu was a French physicist who independently discovered the curve much later. Perhaps because Cornu used the curve in applications, his name is more commonly associated with the curve. At least I’ve more often seen it named after Cornu. This is an example of Stigler’s law that things are usually not named after the first person to discover them.


Categorical products

Introduction

There’s an odd sort of partisan spirit to discussions of category theory. They often have the flavor of “Category theory is great!” or “Category theory is a horrible waste of time!” You don’t see this sort of partisanship around, say, probability. Probability theory is what it is, and if you need it, you use it. If you don’t need it, you don’t use it. I think of category theory in a similar way. It’s good for some things and not for others.

In this post I’ll look at just one little piece of category theory, the definition of products, and use it to give a flavor of category theory in general.

Initial objections

The first time I saw category theory’s definition of a product I thought it was a bizarre complication. “The product of A and B is an object P such that for any other object X …”

What is this X doing in our definition? It’s not our product, nor is it one of the things we’re taking the product of.  And why introduce a diagram? Is the product of two mathematical objects a picture?! Why not come out and say what a product is rather than saying what it does? It’s just ordered pairs, right?

Category theory is all about how things behave rather than what they’re made of inside. So you could say that talking about pairs of elements violates the rules of the game. But that raises the question of why play this game at all. What do we get in return for placing such severe and unusual restrictions on ourselves?

The answer is that we get to see broader connections. When we focus on behavior rather than internal composition, we can see that two things behave the same even though they look different inside. Software developers should be familiar with this idea: depend on interface rather than implementation.

Definition

OK, so what is this mysterious definition of product? It’s a mouthful, but we’ll explain why it has to be what it is.

Given two objects A and B in some category, a product of A and B is an object P in that category and a pair of morphisms π1: P → A and π2: P → B such that for every object X with morphisms f1: X → A and f2: X → B, there exists a unique morphism f that makes the following diagram commute.

Commutative diagram for categorical product

Whew! That’s a lot more work than saying a product is the set of ordered pairs (a, b) with a from A and b from B. And it’s not the first definition of product a student should see. However, there are three reasons why it’s worth introducing later:

  1. The ordered pair definition is not complete.
  2. The categorical definition is not as complex as it seems.
  3. The categorical definition makes new connections visible.

Why not ordered pairs

Saying “a product is just ordered pairs” isn’t enough. You have to say how the product relates to the things it’s a product of. In the case of a Cartesian product of sets, the projections are so obvious that it’s hard to realize they’re there, but in general they need to be specified.

Another reason the ordered pair definition isn’t complete is that you need to say how the product is structured. If you’re taking the product of groups, for example, then you have to say how the group operation is defined on these ordered pairs. Or if you’re taking the product of two topological spaces, then you have to say what the topology is on this set whose points are the ordered pairs.

The categorical definition doesn’t tell you how to construct a product, but it tells you how to know when you’ve found something that works. That’s the trade-off: in order to have a theory that exposes wider connections, it can’t be too tied to a specific example. Whether that’s an acceptable trade-off depends on your aim.

To reach further with our theory, we have to look at how things behave rather than how they are constructed. So how does a product behave? It lets you take components: here’s the first component, here’s the second. That’s about it. The categorical definition formalizes this in terms of projections, and it says that this is a universal property of products: anything else that acts like a product factors uniquely through the product.
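
Here's a small Python sketch of that behavior for the Cartesian product of sets. It's only an illustration of the universal property; the names pi1, pi2, and mediating are made up for the occasion.

# Projections for the product P = A x B, modeled as Python tuples.
def pi1(p): return p[0]  # first projection, P -> A
def pi2(p): return p[1]  # second projection, P -> B

# Given f1: X -> A and f2: X -> B, the unique morphism f: X -> P
# making the diagram commute is forced to be x |-> (f1(x), f2(x)).
def mediating(f1, f2):
    return lambda x: (f1(x), f2(x))

f = mediating(len, str.upper)  # X = strings, A = ints, B = strings
assert pi1(f("hello")) == len("hello")
assert pi2(f("hello")) == "hello".upper()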

In general you can’t just say products are ordered pairs. Sometimes products are not pairs, and sometimes pairs are not products. So the ordered pair definition doesn’t always apply. And when it does apply, it keeps us from seeing how products relate to coproducts, limits, and other operations.

When products are not pairs

Here’s an example of a product that’s not a pair. A partially ordered set can be viewed as a category: the elements of the set are the objects of the category, and there is a morphism from a to b if and only if a ≤ b. In that category the product of a and b is their greatest lower bound, the meet a ∧ b. For example, in the positive integers ordered by divisibility, the product of a and b is their greatest common divisor.

When pairs are not products

Here’s an example of a pair that’s not a product. The category of fields does not generally have products. You can form ordered pairs of elements from two fields, but you can’t always define any operation on these pairs that will turn them into a field.

For example, the number of elements in a finite field must be a power of a prime. If you take a field of order 5 and a field of order 7, there are 35 ordered pairs of elements, but there is no field of order 35.

But is it worth it?

The categorical definition of products is difficult to understand. It’s analogous to the δ-ε definition of limits: not the first thing you think of, but the rigorous definition that will generalize well into new situations.

Abstraction should follow experience, not precede it. You need to have multiple examples of products in your mind before you see any advantage to abstracting the idea of a product.

So what does the abstraction buy you? Maybe nothing! It depends on what you’re after. One thing it might do for you is help you to be more consistent. Programming language designers, for example, use category theory to make languages more consistent and easier to think about. A language might want to handle various kinds of products uniformly, even when the products look very different at first. In addition to consistently implementing what they should, category theory might guide designers to not implement what they shouldn’t. For example, above we said that it doesn’t make sense in general to take the product of two fields.

Category theory also suggests new questions. For example, duality is pervasive throughout category theory. For every concept, there’s a co-concept. So once you identify a product in some context, it’s natural to ask what coproducts are, and these tend to be less obvious than products. And going back to consistency, category theory might guide you to handle dual concepts in a dual manner.


Beats: amplitude modulation in radios and musical instruments

What do tuning a guitar and tuning a radio have in common? Both are examples of beats or amplitude modulation.

Examples

In an earlier post I wrote about how beats come up in vibrating systems, such as a mass and spring combination or an electric circuit. Here I look at examples from music and radio.

Music

When two musical instruments play nearly the same note, they produce beats. The number of beats per second is the difference in the two frequencies. So if two flutes are playing an A, one playing at 440 Hz and one at 442 Hz, you’ll hear a pitch at 441 Hz that beats two times a second. Here’s a wave file of two pure sine waves at 440 Hz and 442 Hz.

As the players come closer to being in tune, the beats slow down. Sometimes you don’t have two instruments but two strings on the same instrument. Guitarists listen for beats to tell when two strings are playing the same note with the same pitch.

AM radio

The same principle applies to AM radio. A message is transmitted by multiplying a carrier signal by the content you want to broadcast. The beats are the content. As we’ll see below, in some ways the musical example and the AM radio example are opposites. With tuning, we start with two sources and create beats. With AM radio, we start by creating beats, then see that we’ve created two sources, the sidebands of the signal.

Mathematical explanation

Both examples above relate to the following trig identity:

cos(a − b) + cos(a + b) = 2 cos a cos b

And because we’re looking at time-varying signals, slip in a factor of 2πt:

cos(2π(a − b)t) + cos(2π(a + b)t) = 2 cos(2πat) cos(2πbt)
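
Here's a quick numerical sanity check of the identity:

import numpy as np

t = np.linspace(0, 1, 1000)
a, b = 441, 1  # the values used in the music example below
lhs = np.cos(2*np.pi*(a - b)*t) + np.cos(2*np.pi*(a + b)*t)
rhs = 2*np.cos(2*np.pi*a*t)*np.cos(2*np.pi*b*t)
assert np.allclose(lhs, rhs)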

Music

In the case of two pure tones, slightly out of tune, let a = 441 and b = 1. Playing an A 440 and an A 442 at the same time results in an A 441, twice as loud, with the amplitude going up and down like cos 2πt, i.e. oscillating two times a second. (Why two times and not just once? One beat for the maximum and one for the minimum of cos 2πt.)

It may be hard to hear beats because of complications we’ve glossed over. Musical instruments are never perfectly in phase, but more importantly they’re not pure tones. An oboe, for example, has strong components above the fundamental frequency. I used a flute in this example because although its tone is not simply a sine wave, it’s closer to a sine wave than other instruments, especially when playing higher notes. Also, guitarists often compare the harmonics of two strings. These are purer tones and so it’s easier to hear beats between them.

Radio

For the case of AM radio, read the equation above from right to left. Let a be the frequency of the carrier wave. For example if you’re broadcasting on AM station 700, this means 700 kHz, so a = 700,000. If this station were broadcasting a pure tone at 440 Hz, b would be 440. This would produce sidebands at 700,440 Hz and 699,560 Hz.

AM signal

In practice, however, the carrier is not multiplied by a signal like cos 2πbt but by 1 + m cos 2πbt, where |m| < 1 to avoid over-modulation. Without this extra factor of 1 the signal would be 100% modulated; the envelope of the signal would pinch all the way down to zero. By including the factor of 1 and using a modulation index m less than 1, the signal looks more like the image above, with the envelope not pinching all the way down. (Over-modulation occurs when m > 1. Instead of the envelope pinching to zero, the upper and lower parts of the envelope cross.)
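
Here's a sketch that plots such a signal with m = 0.5. The frequencies are scaled far below broadcast values so the envelope is visible in the plot.

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1, 10000)
f_c, f_m, m = 100, 5, 0.5  # illustrative values, not broadcast frequencies

signal = (1 + m*np.cos(2*np.pi*f_m*t)) * np.cos(2*np.pi*f_c*t)
envelope = 1 + m*np.cos(2*np.pi*f_m*t)  # dips to 1 - m = 0.5, never to zero

plt.plot(t, signal)
plt.plot(t, envelope, "--")
plt.plot(t, -envelope, "--")
plt.show()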



Creating police siren sounds with frequency modulation

Yesterday I was looking into calculating fluctuation strength and playing around with some examples. Along the way I discovered how to create files that sound like police sirens. These are sounds with high fluctuation strength.


The Python code below starts with a carrier wave at fc = 1500 Hz. Not surprisingly, this frequency is near where hearing is most sensitive. Then this carrier is frequency-modulated by a signal of frequency fm, which determines the frequency of the fluctuations.

The slower example produced by the code below sounds like a police siren. The faster example makes me think more of an ambulance or fire truck. Next time I hear an emergency vehicle I’ll pay more attention.

If you use a larger value of the modulation index β and a smaller value of the modulation frequency fm you can make a sound like someone tuning a radio, which is no coincidence.

Here are the output audio files in .wav format:

slow.wav

fast.wav

from scipy.io.wavfile import write
from numpy import arange, pi, sin, int16

def f(t, f_c, f_m, beta):
    # t    = time
    # f_c  = carrier frequency
    # f_m  = modulation frequency
    # beta = modulation index
    return sin(2*pi*f_c*t - beta*sin(2*f_m*pi*t))

def to_integer(signal):
    # Take samples in [-1, 1] and scale to 16-bit integers,
    # values between -2^15 and 2^15 - 1.
    return int16(signal*(2**15 - 1))

N = 48000 # samples per second
x = arange(3*N) # three seconds of audio

data = f(x/N, 1500, 2, 100)
write("slow.wav", N, to_integer(data))

data = f(x/N, 1500, 8, 100)
write("fast.wav", N, to_integer(data))



Frequentist properties of Bayesian methods

Bayesian methods for designing clinical trials have become more common, and yet these Bayesian designs are almost always evaluated by frequentist criteria. For example, a trial may be designed to stop early 95% of the time under some bad scenario and stop no more than 20% of the time under some good scenario.

These criteria are arbitrary, since the “good” and “bad” scenarios are arbitrary, and since the stopping probability requirements of 95% and 20% are arbitrary. Still, there’s an idea lurking in the background that in every design there must be something that is shown to happen no more than 5% of the time.

It takes a great deal of effort to design Bayesian methods with desired frequentist properties. It’s an inverse problem, searching for the parameters in a high-dimensional design space, usually via lengthy simulation, that cause the method to satisfy some criteria. Of course frequentist methods satisfy frequentist criteria by design and so meet these criteria with far less effort. It’s rare to see the tables turned, evaluating frequentist methods by Bayesian criteria.

Sometimes the effort to beat frequentist designs at their own game is futile because the frequentist designs are optimal by their own criteria. More often, however, the Bayesian and frequentist methods being compared are not direct competitors but only analogs. The aim in this case is to match the frequentist method’s operating characteristics by one criterion while doing better by a new criterion.

Sometimes a Bayesian method can be shown to have better frequentist operating characteristics than its frequentist counterpart. This puts dogmatic frequentists in the awkward position of admitting that what they see as an unjustified approach to statistics has nevertheless produced a superior product. Some anti-Bayesians are fine with this, happy to have a procedure with better frequentist properties, even though it happened to be discovered via a process they view as illegitimate.


Related post: Bayesian clinical trials in one zip code

If it were easy …

“If it were easy, someone would have done it.” Maybe not.

Maybe the thing is indeed easy, and has been done before. Then someone was the first to do it. The warning that it had been done before didn’t apply to this person, even though it would apply to the subsequent people with the same idea.

This reminds me of the story of two economists walking down the street. They notice a $20 bill on the sidewalk and the first asks “Aren’t you going to pick it up?” The second replies “No, it’s not really there. If it were, someone would have picked it up by now.”

Sometimes a solution is easy, but nobody has had the audacity to try it. Or maybe circumstances have changed so that something is easy now that hasn’t been before.

Sometimes a solution is easy for you, if not for many others. See how much less credible the opening sentence sounds with “for you” inserted: “If it were easy for you, someone would have done it.”