This evening I ran across a dialog that suggests that decimal notation is wrong.

It happened when I started learning about decimals in school. I knew then that ten has one zero, a hundred has two, a thousand three, and so on. And then this teacher starts saying that

a tenth doesn’t have any zero, a hundredth has only one, a thousandth has only two, and so on. … Only much later did I have enough perspective to put my finger on the problem: *The decimal point is always misplaced!*

Source: Conics. Emphasis in the original.

The proposed solution is to put the decimal point *above* the units position rather than after it. Then the notation would be symmetric. For example, 1000 and 1/1000 would look like this:
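The proposal can be sketched in plain text. This is my own illustrative rendering, not notation from the post; the function and its name are made up for the example:

```python
# Render a numeral with the decimal point printed *above* the units
# digit, per the proposal. Illustrative sketch only.
def overpoint(s):
    int_part, _, frac_part = s.partition(".")
    digits = (int_part or "0") + frac_part
    # The units digit is the last digit of the integer part.
    units_index = len(int_part) - 1 if int_part else 0
    return " " * units_index + "." + "\n" + digits

print(overpoint("1000"))   # point sits over the final 0
print(overpoint("0.001"))  # point sits over the leading 0
```

Read outward from the point, both renderings are a 1 and three 0s, which is exactly the symmetry the author wants.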

Of course decimal notation isn’t likely to change, but the author makes an interesting point.

Early ways of writing decimal fractions, before the decimal point was generally accepted, used conventions very similar to the one suggested above. What goes around comes around ;)

Maybe he should just pretend that the decimal point is a special, tiny zero. :-)

Or the teacher should have mentioned that numerals other than 0 can be used as placeholders. (The 1 in 1234 is also a thousand.) A thousand has three trailing placeholders, not zeros. A thousandth has three leading placeholders, the first of which is the decimal point.

But 0.1 has one zero!

He’s right, but the “problem” is less bothersome in scientific notation, where the units place is 10^0, the tens are 10^1, and the tenths are 10^-1. So, in some sense, we already have notation for people who are bothered by the placement of the decimal point.

Markus: Indeed it does, and it helps to think of that zero as belonging to the fractional part.

Here’s another way to think about this. Decimal notation writes numbers as coefficients of powers of 10. Traditional notation puts a divider between non-negative powers of 10 and negative powers of 10. The units place is the coefficient of the 0th power of 10, so it makes sense to visually balance its association with either positive or negative powers of 10.
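That reading can be made concrete with a short sketch (the function and its name are mine): each digit is a coefficient of a power of 10, and the decimal point just marks where the exponents turn negative.

```python
def place_values(s):
    """Return (exponent, digit) pairs for a decimal numeral string."""
    int_part, _, frac_part = s.partition(".")
    pairs = []
    # Non-negative powers: the rightmost integer digit is the 10^0 coefficient.
    for k, d in enumerate(reversed(int_part)):
        pairs.append((k, int(d)))
    # Negative powers: the first fractional digit is the 10^-1 coefficient.
    for k, d in enumerate(frac_part, start=1):
        pairs.append((-k, int(d)))
    return sorted(pairs, reverse=True)

print(place_values("1000.001"))
# → [(3, 1), (2, 0), (1, 0), (0, 0), (-1, 0), (-2, 0), (-3, 1)]
```

The output makes the symmetry visible: the coefficient pattern around exponent 0 is a mirror image, even though the written numeral 1000.001 is not.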

I suggest we just randomly put the decimal either after the units position or before it, so that it appears roughly 50% of the time in either place. Sort of how we randomly use “he” or “she” now when we need a generic pronoun. Once enough people are using it, it will become standard and no longer seem awkward.

And 50% of the time when we use the FFT, the DC component should be in the last position of the array.

John, as Thony mentions, this was one of the very earliest methods of marking decimals. Simon Stevin, whom many credit with the invention of decimal numbers in his “La Thiende,” used a system of marking the powers of ten with digits in circles above the places (sort of like the way we teach place value to younger children). In my notes I wrote that Stevin did not think of these values as fractions; in fact, he advertised his book by saying that it taught men “how to perform with ease, all computations … without fractions.” He seems to have viewed the values as integers, much as we now think of minutes and seconds as integers. Few people consciously think of 3 minutes as 3/60 of an hour in regular computations. This was the view that Stevin took. He did not even use fraction names for the place values, but referred to them as prime, second, third, etc.

There is also an image from his book showing essentially the same approach as the writer. http://pballew.net/arithme9.html#decpoint

It’s a favourite project of mine

a new value of pi to assign.

I would fix it at 3

for it’s simpler, you see,

than 3 point 1 4 1 5 9.

John, I shamelessly piggybacked off this blog with one of my own, and can only justify it by mentioning that I added an image of the page from Stevin’s De Thiende (which I mis-wrote the title of in my last comment).

Oh, and I did mention your blog off in a tiny corner… Thanks for the shameless piggybacking. :) Interesting history.

And suddenly, I remember as a kid being confused by this as well… Strange how accustomed we can become to any darn thing.

If you count the number of zeros in 10 and in 0.1, you’ll see that there are just as many.

Natanael: Yes, but 10. and .1 do not. Nor do 10.0 and 0.1 have the same number of 0’s. The notation is inherently asymmetric. However, including an optional zero before decimal points but not after makes the notation less asymmetric.

Pun noted & appreciated!

:)

I always thought of it like this, and here is what I suggest for writing these correctly without having to memorize anything:

If you want to write 1/1000, simply write 1000 in reverse and put a dot after the first 0, just to indicate that this is 1/1000 and not 1 with some zeros in front of it.

Now that I think more about it, 1/1000 can be thought of as the “inverse of 1000,” so for convenience of writing, and to signal that it is the inverse of the number, we write it as 0001; then, to remove the ambiguity with 1, we place a decimal point after the first zero. But this raises the question of why we place the decimal point after the first zero and not before it!
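The reversal trick reads nicely as code. A minimal sketch (the function name is mine, and it assumes the input is a 1 followed by zeros):

```python
def reciprocal_of_power_of_ten(s):
    """Write 1/s for a numeral like '1000' via the reversal trick."""
    assert s[0] == "1" and set(s[1:]) <= {"0"}, "expects 1 followed by zeros"
    rev = s[::-1]                    # "1000" -> "0001"
    return rev[0] + "." + rev[1:]    # dot after the first zero -> "0.001"

print(reciprocal_of_power_of_ten("1000"))  # → 0.001
```

Reversing preserves the digit count, which is why the trick never needs you to memorize how many zeros a thousandth has.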

This problem of ‘at’ versus ‘between’ reminds me of early text-editing programs. Later ones put the cursor between two characters, but in earlier ones the cursor was located at one character (because they were limited to a fixed forty-character display where they could render characters in reverse video).

There was also a similar distinction between C++’s Iterators (which pointed at an element in a collection) and Java’s Enumeration (which sat between two elements and returned each element as you called nextElement() on it). Java seems to have decided C++ had it right and has since added the Iterator interface, which is to be preferred for new implementations.

The ones digit raises ten to a power that is neither negative nor positive, which is where the asymmetry starts. To create symmetry you could recognize the unique nature of that exponent and put decimal points before and after the ones digit, so the points represent a transition to signed exponents from the unsigned exponent.

Thus 1000.001 would translate to 100.0.001
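The double-point convention is easy to sketch for the case in the example (function name is mine; it only handles numerals shaped like the one above):

```python
def double_point(s):
    """Rewrite standard notation with points flanking the units digit."""
    int_part, _, frac_part = s.partition(".")
    out = int_part[:-1] + "." + int_part[-1]  # point before the units digit
    if frac_part:
        out += "." + frac_part                # point after it, as usual
    return out

print(double_point("1000.001"))  # → 100.0.001
```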

Why would one expect decimal notation to be “multiplicatively symmetric”? It is not even close, e.g., the multiplicative inverse of 2. is not .2 .

The role of the decimal point seems to be, rather, to set off the ring of integers.

Fractions are a perfectly good notation that treats multiplicative inverses symmetrically.

LJS: One might reasonably expect 2 and its multiplicative inverse to be symmetric when written in base 2.