Visualizing Swedish vowels

A few days ago I wrote a post comparing English and Japanese vowel sounds in a 2D chart. In this post I’d like to do something similar for English and Swedish. As before the data come from [1].

A friend of mine who learned Swedish would joke about how terribly he had to contort his mouth to speak the language. Swedish vowels are objectively difficult for non-native speakers, as can be seen in a vowel chart. The vertical axis runs from closed sounds on top to open sounds on the bottom. The horizontal axis runs from front vowels on the left to back vowels on the right.

Swedish vowel sounds

There are a lot of vowel sounds, and many of them are clustered close together. Japanese, by contrast, has only five vowel sounds, and they’re widely spread apart.

Japanese vowel sounds

The vowel charts for Spanish and Hebrew look fairly similar to the chart for Japanese above: five vowels spread out in roughly the same locations.

It wouldn’t matter so much that Swedish has a lot of tightly clustered vowel sounds if your native language has the same sounds, but the following chart shows that English and Swedish vowels are quite different. The red x’s mark English vowel locations and the blue dots mark Swedish vowels.

Swedish and English vowel sounds

[1] Handbook of the International Phonetic Association: A Guide to the Use of the International Phonetic Alphabet. Cambridge University Press, 2021.

Making flags in Unicode

I recently found out [1] that the Unicode sequences for flag emoji are created by taking the two-letter country abbreviation (ISO 3166-1 alpha-2) and replacing both letters with their counterparts in the range U+1F1E6 through U+1F1FF.

For example, the abbreviation for Canada is CA, and the characters 🇨 (U+1F1E8) and 🇦 (U+1F1E6) together create 🇨🇦.

boxed C plus boxed A = Canadian flag
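The letter-shifting arithmetic can be sketched in a couple of lines of Python with no dependencies (to_flag is a made-up name for illustration):

```python
# Shift each capital letter into the regional indicator range:
# 'A' (U+0041) maps to U+1F1E6, 'B' to U+1F1E7, and so on.
def to_flag(country_code):
    offset = 0x1F1E6 - ord('A')
    return "".join(chr(ord(c) + offset) for c in country_code.upper())

print(to_flag("CA"))  # 🇨🇦
```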

This is illustrated by the following Python code.

    import iso3166

    def flag_emoji(name):
        alpha = iso3166.countries.get(name).alpha2
        box = lambda ch: chr( ord(ch) + 0x1F1E6 - ord('A') ) # shift into regional indicators
        return box(alpha[0]) + box(alpha[1])

The name we give to flag_emoji need not be the full country name, like Canada. It can be anything that iso3166.countries.get supports, which also includes two-letter abbreviations like CA, three-letter abbreviations like CAN, or ISO 3166 numeric codes like 124.

We can use the following code to print a collage of flags:

    def print_all_flags():
        for i, c in enumerate( iso3166.countries ):
            print(flag_emoji(c.name), end="")
            if i%25 == 24: print()

10 by 25 array of flags

[1] I learned this from watching Dylan Beattie’s talk Plain Text on YouTube.

Visualizing English and Japanese vowels

Vowel sounds can be visualized in a two-dimensional space according to tongue position. The vertical axis runs from closed sounds at the top to open sounds at the bottom, and the horizontal axis runs from front to back. See a linguistics textbook for far more detail.

English has five vowel letters, but a lot more than five vowel sounds. Scholars argue about how many vowel sounds English and other languages have because there’s room for disagreement on how much two sounds can differ and still be considered variations on the same sound. The IPA Handbook [1] lists 11 vowel sounds in American English, not counting diphthongs.

When I wrote about Japanese hiragana and katakana recently, I showed how the letters are arranged into a grid with one side labeled with English vowel letters. Is that justified? Does Japanese really have just five vowel sounds, and are they similar to five English vowels? Essentially yes. This post will show how English and Japanese vowel sounds compare according to [1].

Here are versions of the vowel charts for the two languages that I made using Python’s matplotlib.

First English:

Then Japanese:

And now the two combined on one plot:
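A chart along these lines takes only a few lines of matplotlib. The coordinates below are rough, illustrative positions for the five Japanese vowels, not the measured values from [1]:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Rough (front-back, closed-open) positions on a 0-1 scale,
# for illustration only.
vowels = {"i": (0.10, 0.10), "e": (0.25, 0.45), "a": (0.55, 0.95),
          "o": (0.85, 0.45), "u": (0.90, 0.10)}

fig, ax = plt.subplots()
for symbol, (x, y) in vowels.items():
    ax.text(x, y, symbol, fontsize=16, ha="center")
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.invert_yaxis()  # closed vowels at the top, open at the bottom
ax.set_xlabel("front ... back")
ax.set_ylabel("closed ... open")
fig.savefig("vowels.png")
```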

Four out of the five Japanese vowels have a near equivalent in English. The exception is the Japanese vowel with IPA symbol ‘a’, which is midway between the English vowels with symbols æ (U+00E6) and ɑ (U+0251), somewhere between the a in had and the a in father.

Update: See the comments for an acoustic phonetician’s input regarding frequency analysis.

Update: Here is a similar post for Swedish. Swedish is interesting because it has a lot of vowel sounds, and the sounds are in tight clusters.

Analogy with KL divergence

The differences between English and Japanese vowels are asymmetric: an English speaker will find it easier to learn Japanese vowels than a Japanese speaker would find it to learn English vowels. This is reminiscent of the Kullback-Leibler divergence in probability and statistics.

KL-divergence is a divergence and not a distance, even though it is often called a distance, because it’s not symmetric. The KL-divergence between two random variables X and Y, written KL(X || Y), is the average surprise in seeing Y when you expected X. If you expect English vowel sounds and hear Japanese vowel sounds you’re not as surprised as if you expect Japanese vowel sounds and hear English. The English student of Japanese hears familiar sounds shifted a bit, but the Japanese student of English hears new sounds.
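The asymmetry is easy to see numerically. Here is a toy sketch with two made-up discrete distributions, nothing to do with vowel data:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) in bits."""
    return sum(x * math.log2(x / y) for x, y in zip(p, q) if x > 0)

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]

# The two directions give different numbers: KL is not symmetric.
print(kl(p, q), kl(q, p))
```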

[1] Handbook of the International Phonetic Association: A Guide to the Use of the International Phonetic Alphabet. Cambridge University Press, 2021.

Katakana, Hiragana, and Unicode

I figured out something that I wasn’t able to find by searching, so I’m posting it here in case other people have the same question and the same difficulty finding an answer.

I’m sure other people have written about this, but I couldn’t find it. Maybe lots of people have written about this in Japanese but not many in English.

Japanese kana consists of two syllabaries, hiragana and katakana, that are like phonetic alphabets. Each has 46 basic characters, and each corresponds to a block of 96 Unicode characters. I had two simple questions:

  1. How do the 46 characters map into the 96 characters?
  2. Do they map the same way for both hiragana and katakana?

Hiragana / katakana correspondence

I’ll start with the second question because it’s easier. Hiragana and katakana are different ways of representing the same sounds, and they correspond one to one. For example, the full name of U+3047 (ぇ) is

HIRAGANA LETTER SMALL E

and the full name of its katakana counterpart U+30A7 (ェ) is

KATAKANA LETTER SMALL E
The only difference as far as Unicode goes is that katakana has three code points whose hiragana counterparts are unassigned, but these are not among the basic letters.

The following Python code shows that the names of all the characters are the same except for the name of the system.

    from unicodedata import name

    unused = [0, 0x57, 0x58] # offsets U+3040, U+3097, U+3098: unassigned in hiragana

    for i in range(0x59):
        if i in unused:
            continue
        h = name(chr(0x3040 + i))
        k = name(chr(0x30a0 + i))
        assert h == k.replace("KATAKANA", "HIRAGANA")

Mapping 46 into 50 and 96

You’ll see kana written in a grid with one side labeled with 5 vowels and the other labeled with 10 consonants, called a gojūon (五十音). That’s 50 cells, and in fact gojūon literally means 50 sounds, so how do we get 46? Five cells are empty because their sounds are unused or archaic, and one letter doesn’t fit into the consonant-vowel grid at all.

In the image below, the table on the left is for hiragana and the table on the right is for katakana. HTML versions of the tables are available here.

Left out of each table is ん in hiragana and ン in katakana.

So how does each set of 46 characters map into its Unicode code block?

Unicode numbers the letters consecutively if you traverse the grid increasing vowels first, then consonants, and adding the straggler at the end. But the reason 46 letters expand into more code points is that each letter can have one, two, or three variations. And there are various miscellaneous other symbols in the Unicode block.

For example, there is a LETTER E as well as the SMALL LETTER E mentioned above. Other variations seem to correspond to voiced and unvoiced versions of a consonant with a phonetic marker added to the voiced version. For example, く is U+304F, HIRAGANA LETTER KU, and ぐ is U+3050, HIRAGANA LETTER GU.

Here is how hiragana maps into Unicode. Each cell is U+3000 plus the value shown.

         a  i  u  e  o 
        42 44 46 48 4A 
     k  4B 4D 4F 51 53 
     s  55 57 59 5B 5D 
     t  5F 61 64 66 68 
     n  6A 6B 6C 6D 6E 
     h  6F 72 75 78 7B 
     m  7E 7F 80 81 82 
     y  84    86    88 
     r  89 8A 8B 8C 8D 
     w  8F          92 

The corresponding table for katakana is the previous table plus 0x60:

         a  i  u  e  o 
        A2 A4 A6 A8 AA 
     k  AB AD AF B1 B3 
     s  B5 B7 B9 BB BD 
     t  BF C1 C4 C6 C8 
     n  CA CB CC CD CE 
     h  CF D2 D5 D8 DB 
     m  DE DF E0 E1 E2 
     y  E4    E6    E8 
     r  E9 EA EB EC ED 
     w  EF          F2 

In each case, the letter missing from the table is the next consecutive value after the last entry in the table: ん is U+3093 and ン is U+30F3.
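A quick spot-check of the tables in Python:

```python
# Table entries: the k-row, u-column cells
assert ord("く") == 0x304F  # HIRAGANA LETTER KU
assert ord("ク") == 0x30AF  # KATAKANA LETTER KU
# The stragglers come right after the last table entries (92 and F2)
assert ord("ん") == 0x3093
assert ord("ン") == 0x30F3
# The katakana table is the hiragana table shifted by 0x60
assert ord("ン") - ord("ん") == 0x60
print("all checks pass")
```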

Dominoes in Unicode

I was spelunking around in Unicode and found that there are assigned characters for representing domino tiles and that the characters are enumerated in a convenient order. Here is the code chart.

There are codes for representing tiles horizontally or vertically. And even though, for example, the 5-3 is the same domino as the 3-5, there are separate characters for representing the orientation of the tile: one for 3 on the left and one for 5 on the left.

When you include orientation like this, a domino becomes essentially a base 7 number: the number of spots on one end is the number of 7s and the number of spots on the other end is the number of 1s. And the order of the characters corresponds to the order as base 7 numbers:

0-0, 0-1, 0-2, …, 1-0, 1-1, 1-2, … 6-6.

The horizontal dominoes start with the double blank at U+1F031 and the vertical dominoes start with U+1F063, a difference of 0x32, i.e. 50 in base 10. So you can rotate a domino tile by adding 50 to or subtracting 50 from its code point.
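A hypothetical rotate function (my name, not anything standard) makes this concrete:

```python
# Horizontal tiles run U+1F031..U+1F061, vertical tiles U+1F063..U+1F093.
# Shifting by 0x32 (decimal 50) switches a tile's orientation.
def rotate(cp):
    return cp - 0x32 if cp >= 0x1F063 else cp + 0x32

horizontal_double_blank = 0x1F031
print(hex(rotate(horizontal_double_blank)))  # the vertical double blank
```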

The following tiny Python function gives the codepoint for the domino with a spots on the left (or top) and b spots on the right (or bottom).

    def code(a, b, wide):
        cp = 0x1f031 if wide else 0x1f063
        return cp + 7*a + b

We can use this function to print a (3, 5) tile horizontally and a (6, 4) tile vertically.

    print( chr(code(3, 5, True )),
           chr(code(6, 4, False)) )

To my surprise, my computer had the fonts installed to display the results. This isn’t guaranteed for such high Unicode values.

horizontal 3-5 domino and vertical 6-4

Branch cuts for elementary functions

As far as I know, all contemporary math libraries use the same branch cuts when extending elementary functions to the complex plane. It seems that the current conventions date back to Kahan’s paper [1]. I imagine to some extent he codified existing practice, but he also settled some issues, particularly regarding floating point implementation.

I’ve verified that the following branch cuts are used by Mathematica, Common Lisp, and SciPy. If you know of any software that follows other conventions, please let me know in a comment.

The conventional branch cuts are as follows.

  • sqrt: [-∞, 0)
  • log: [-∞, 0]
  • arcsin: [-∞, -1] and [1, ∞]
  • arccos: [-∞, -1] and [1, ∞]
  • arctan: [-∞i, -i] and [i, ∞i]
  • arcsinh: [-∞i, -i] and [i, ∞i]
  • arccosh: [-∞, 1]
  • arctanh: [-∞, -1] and [1, ∞]
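One way to see a branch cut numerically is to evaluate a function just above and just below the cut and watch the jump. A sketch with Python's cmath, using the square root's cut along the negative real axis:

```python
import cmath

# The square root's cut runs along the negative real axis, so values
# just above and just below the axis at -1 differ: roughly i vs -i.
eps = 1e-12
above = cmath.sqrt(complex(-1.0, eps))
below = cmath.sqrt(complex(-1.0, -eps))
print(above, below)
```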

[1] W. Kahan. Branch Cuts for Complex Elementary Functions or Much Ado About Nothing’s Sign Bit. The State of the Art in Numerical Analysis. Clarendon Press (1987).

Series for inverse cosine at 1

Suppose you need to estimate the inverse cosine of an argument near 1. There’s a series for that:

\arccos(z) = \sqrt{2} \sqrt{1-z} \left( 1 + \frac{1-z}{12} + \frac{3}{160}(1-z)^2 + \cdots \right)

You can find this series, for example, here.

This comes in handy, for example, when working with the analog of the Pythagorean theorem on a sphere.

You could just use the series and be on your way. But there’s a lot more going on than immediately meets the eye.
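As a quick numerical sanity check, here is the series truncated after the terms shown above, compared with math.acos (acos_series is a made-up name):

```python
import math

def acos_series(z):
    # the three terms of the series shown above
    u = 1 - z
    return math.sqrt(2) * math.sqrt(u) * (1 + u/12 + 3*u**2/160)

# For an argument close to 1 the truncated series is very accurate.
print(acos_series(0.99), math.acos(0.99))
```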

Why this shouldn’t be possible

We’re looking at a series for inverse cosine centered at 1, and yet inverse cosine is multivalued in a neighborhood of 1: for an argument slightly less than 1, there are two possible angles that have such a cosine. That matters because in order to have a power series at a point, a function has to be well behaved in a disk around that point in the complex plane. Not only is our function not well behaved, it’s not even well defined until we consider branch cuts. Also, the function arccos(z) isn’t differentiable at 1; the derivative has a singularity at 1.

In a nutshell, we’re trying to expand a function in a series at a point where the function is badly behaved. That sounds impossible, or at least ill advised.

Why this may be possible

The series above is not a series for arccos per se. If we divide both sides of the series by √(2-2z) we see that we actually have a series for

\frac{\arccos(z)}{\sqrt{2 - 2z}}
Although arccos is badly behaved at 1, so is √(2-2z), and their ratio is well behaved at 1. In fact, it’s an analytic function and so it has a power series.

Inverse cosine has a series expansion but it does not have a power series. The series at the top is not a power series because it does not consist solely of powers of z. There’s a square root function on the right side as well, and this function is crucial to making things work.


Why would anyone think to find a series this way, i.e. why divide by √(2-2z)?

For small angles x,

\cos(x) \approx 1 - \frac{x^2}{2}

and so, inverting this approximation, for z near 1

\arccos(z) \approx \sqrt{2 - 2z}

One might hope, correctly as it turns out, that dividing arccos(z) by a good approximation to it would yield a function nice enough to expand in a power series.

Why it is possible

Let’s plot

\frac{\arccos(z)}{\sqrt{2 - 2z}}
to see whether it looks like a reasonable function. There’s usually no middle ground in complex variables: functions are either analytic or badly behaved. Both the numerator and denominator have branch cuts, but hopefully the cuts coincide and the ratio can be extended smoothly across the cuts.

The plot below suggests that is the case.

This plot was produced with

    f[z_] := ArcCos[z] / Sqrt[2 - 2 z]
    ComplexPlot3D[f[z], {z, 0 - I, 2 + I}]

The white streak across the plot is not an accidental artifact of plotting but illustrates something important.

It is not possible to extend arccos(z) to a function that is analytic for all z. You have to exclude some values of z from the domain, i.e. you have to make branch cuts, and Mathematica makes these cuts along the real axis for z ≤ -1 and z ≥ 1.

The square root function also requires a branch cut, and Mathematica chooses that branch cut to be along the negative real axis, which means we have to exclude z > 1. So the branch cuts of our numerator and denominator do coincide. (Inverse cosine has an additional branch cut, but it’s not near 1 so it doesn’t matter for our purposes.)

In summary, the series at the top of the post expands arccos at a point where the function is badly behaved, by dividing it by another function that is badly behaved in the same way, making a function that is well behaved.
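The cancellation can also be checked numerically, assuming Python's cmath follows the conventional branch cuts discussed earlier: approach a point on the cut from either side and the numerator and denominator each jump, but the ratio does not.

```python
import cmath

def f(z):
    return cmath.acos(z) / cmath.sqrt(2 - 2*z)

# Approach a point on the cut (z = 1.5 > 1) from above and below.
eps = 1e-12
above = f(complex(1.5, eps))
below = f(complex(1.5, -eps))
# acos and sqrt each jump across the cut, but the jumps cancel in the ratio.
print(above, below)
```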

Literate programming to reduce errors

I had some errors in a recent blog post that might have been eliminated if I had programmatically generated the content of the post rather than writing it by hand.

I rewrote the example in this post using org-mode. My org file looked like this:

    #+begin_src python :session :exports none
    lax_lat =   33.94
    lax_lon = -118.41
    iah_lat =   29.98
    iah_lon =  -95.34
    #+end_src

    Suppose we want to find how far a plane would travel 
    between Los Angeles (LAX) and Houston (IAH), 

    #+begin_src python :session :exports none
    a = round(90 - lax_lat, 2)
    b = round(90 - iah_lat, 2)
    #+end_src

    /a/ = src_python[:session :results raw]{f"90° – {lax_lat}° = {a}°"}

    /b/ = src_python[:session :results raw]{f"90° – {iah_lat}° = {b}°"}


Here are some observations about the experience.

First of all, writing the post in org-mode was more work than writing it directly, pasting in computed values by hand, but presumably less error-prone. It would also be easier to update. If, for example, I realized that I had the wrong coordinates for one of the airports, I could update the coordinates in one location and everything else would be updated when I regenerated the page.

I don’t think this was the best application of org-mode. It’s easier to use org-mode like a notebook, in which case you’re not trying to hide the fact that you’re mixing code and prose. I wanted to insert computed values into the text without calling attention to the fact that the values were computed. This is fine when you mostly have a text document and you only want to insert a few computed values. When you’re doing more computing it becomes tedious to repeatedly write

    src_python[:session :results raw]{...}

to insert values. It might have been easier in this case to simply write a Python program that printed out the HTML source of the example.
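A minimal sketch of that alternative, using the airport coordinates from the org file above; the HTML fragment format here is made up for illustration:

```python
lax_lat = 33.94
iah_lat = 29.98

# Compute once and interpolate into the output, so the prose
# can never drift out of sync with the computation.
a = round(90 - lax_lat, 2)
b = round(90 - iah_lat, 2)

html = (f"<p><i>a</i> = 90° − {lax_lat}° = {a}°</p>\n"
        f"<p><i>b</i> = 90° − {iah_lat}° = {b}°</p>")
print(html)
```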

There are a couple advantages to org-mode that weren’t relevant here. One is that the same org-mode file can be exported to multiple formats: HTML, LaTeX, ODT, etc. Here, however, I was only interested in exporting to HTML.

Another advantage of org-mode is the ability to mix multiple programming languages. Here I was using Python for everything, but org-mode will let you mix dozens of languages. You could compute one result in R, another result in Haskell, pass both results as arguments into some Java code, etc. You could also include data tables and diagrams in your org-mode file with your prose and code.

Literate programming

In general, keeping code and documentation together reduces errors. Literate programming may be more or less work, depending on the problem, but it reduces certain kinds of errors.

The example above is sorta bargain-basement literate programming. The code being developed was very simple, and not of interest beyond the article it was used in. Literate programming really shines when used to develop complex code, as in the book Physically Based Rendering. (Update: The third edition of this book is available online.)

When code requires a lot of explanation, literate programming can be very helpful. I did a project in psychoacoustics with literate programming a few years ago that would have been hard to do any other way. The project required a lot of reverse engineering and research. A line of code could easily require a couple paragraphs of explanation. Literate programming made the code development much easier because we could develop the documentation and code together, explaining things in the order most suited to the human reader, not to the compiler.

Computing VIN checksums

I’ve had to work a little with VIN numbers lately, and so I looked back at a post I wrote on the subject three years ago. That post goes into the details of Vehicle Identification Numbers and the quirky algorithm used to compute the checksum.

This post captures the algorithm in Python code. See the earlier post for documentation.

    import re

    def char_to_num(ch):
        "Assumes all characters are digits or capital letters."
        n = ord(ch)

        if n <= ord('9'): # digits
            return n - ord('0')

        if n < ord('I'): # A-H
            return n - ord('A') + 1

        if n <= ord('R'): # J-R
            return n - ord('J') + 1

        return n - ord('S') + 2 # S-Z

    def checksum(vin):
        w = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]
        t = 0
        for i, c in enumerate(vin):
            t += char_to_num(c)*w[i]
        t %= 11
        return 'X' if t == 10 else str(t)

    def validate(vin):
        vinregex = "^[A-HJ-NPR-Z0-9]{17}$"
        r = re.match(vinregex, vin)
        return r and checksum(vin) == vin[8]

This code assumes the VIN number is given as ASCII or Unicode text. In particular, digits come before letters, and the numeric values of letters increase with alphabetical order.

The code could seem circular: the input is the full VIN, including the checksum. But the checksum goes in the 9th position, which has weight 0. So the checksum doesn't contribute to its own calculation.

Update: I added a regular expression to check that the VIN contains only valid characters.
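As a cross-check of the code above, here is an independent, table-driven version of the check digit calculation. The VIN below is a widely cited example whose check digit is X:

```python
# Transliteration table: digits map to themselves, A-H to 1-8,
# J-R to 1-9 (with gaps where I, O, Q would fall), S-Z to 2-9.
values = {c: int(c) for c in "0123456789"}
values.update(zip("ABCDEFGH", range(1, 9)))
values.update(zip("JKLMNPRSTUVWXYZ", [1, 2, 3, 4, 5, 7, 9, 2, 3, 4, 5, 6, 7, 8, 9]))

weights = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def check_digit(vin):
    t = sum(values[c] * w for c, w in zip(vin, weights)) % 11
    return 'X' if t == 10 else str(t)

vin = "1M8GDM9AXKP042788"
print(check_digit(vin) == vin[8])  # True
```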