Programming language support for hexadecimal integers is very common. Support for hexadecimal floating point numbers is not.
It’s a common convention to put 0x in front of a number to indicate that it is written in hexadecimal. For example, 0x12 is not a dozen, but a dozen and a half. The 1 is in the sixteens place and the 2 is in the ones place, so 0x12 represents the number we’d write as 18 in base 10. This notation works in every programming language I can think of at the moment.
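For instance, here is that arithmetic checked in Python (an arbitrary choice; any language with C-style literals would do the same):

# 0x12 is 1*16 + 2 = 18
assert 0x12 == 18
print(0x12)   # prints 18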
But how would you tell a computer that you want the hexadecimal value 20.5, which in base 10 would be 32 5/16?
Hex floating point literal notation
There have been several times when I could have used hexadecimal floating point notation and it wasn’t there. It comes in handy when you’re writing low-level floating point code and want to specify the significand directly without going through any base 10 conversion.
Perl added support for hexadecimal floating point literals in version 5.22, in 2015. C++ added the same notation in C++17. It must not have been a high priority for either language, since each went decades without it. Most languages don’t have anything similar.
In Perl and in C++, you can write hexadecimal floats pretty much as you’d expect, except for an exponent on 2 at the end. For example, you might guess that 20.5₁₆ would be written 0x20.5, and that’s a good start. But it’s actually written 0x20.5p0, meaning

20.5₁₆ × 2⁰.

You could also write it as 0x2.05p4, meaning

2.05₁₆ × 2⁴.
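A quick sanity check of the arithmetic, using Python’s float.fromhex, which parses the same notation from a string:

# Two spellings of the same value, parsed with float.fromhex
x = float.fromhex("0x20.5p0")   # 20.5_16 * 2^0
y = float.fromhex("0x2.05p4")   # 2.05_16 * 2^4
assert x == y == 32.3125        # i.e. 32 5/16
print(x)                        # prints 32.3125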
Gotchas
There are several things peculiar about this notation.
First, you might expect a power of 16 at the end rather than a power of 2 since we’re thinking in base 16.
Second, the power of 2 isn’t optional. When you don’t want to multiply by a power of 2, you have to specify the exponent on 2 is 0, as in 0x20.5p0.
Third, the exponent on 2 is written in base 10, not in hex. For example, 0x1p10 represents 1024₁₀ because the “10” is base 10, not hexadecimal or binary.
So in the space of a few characters, you need to think in base 16, base 2, and base 10. Hex, binary, and decimal, all in one tiny package!
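To see the first and third gotchas in action, here’s a small check, again leaning on Python’s float.fromhex as a stand-in for the literal syntax:

# The p introduces a power of 2, not a power of 16:
assert float.fromhex("0x1p4") == 16.0      # 1 * 2^4, not 1 * 16^4 = 65536
# And the exponent itself is read in base 10:
assert float.fromhex("0x1p10") == 1024.0   # 2^10, not 2^0x10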
Code examples
Here are some code examples for printing π = 3.243f6a8885a3…₁₆.
The following Perl code prints 3.14159265358979 three times.
$x = 0x3.243f6a8885a3p0;
$y = 0x0.3243f6a8885a3p4;
$z = 0x32.43f6a8885a3p-4;

print "$x\n";
print "$y\n";
print "$z\n";
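(As noted above, this requires Perl 5.22 or later.)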
The analogous C++ code prints 3.14159 three times. (The default precision for cout is 6 figures.)
#include <iostream>

int main() {
    std::cout << 0x3.243f6a8885a3p0 << std::endl;
    std::cout << 0x0.3243f6a8885a3p4 << std::endl;
    std::cout << 0x32.43f6a8885a3p-4 << std::endl;
    return 0;
}
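(With g++ or clang++ you may need to compile with -std=c++17, since the literal syntax was only standardized in C++17; some compilers also accept it in earlier modes as an extension.)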
In either Perl or C++, 0x1p10 prints as 1024 since the exponent on 2 is a decimal number. And in either language, 0x1pA and 0x1p0xA are syntax errors.
Comments

C has supported hexadecimal floating point literals since the last century :-)
The Swift language does support this notation.
Where most languages would use 0x12, Erlang uses 16#12. According to https://erlang.org/doc/reference_manual/data_types.html#number, the base (before the #) can be in the range 2..36. Erlang doesn’t appear to support hexadecimal floating point. But, then again, it doesn’t support IEEE-754 either, so…
> So in the space of a few characters, you need to think in base 16, base 2, and base 10.
If you want to read Japanese, in a single sentence you might need to be able to read both the hiragana and katakana syllabaries (which are kind of like alphabets), plus kanji characters, and these days there can be the occasional Latin letter.
Python doesn’t support hex float literals, but it has float.hex() and float.fromhex(), which convert to and from strings.
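For instance, a minimal round trip:

import math

s = math.pi.hex()                    # '0x1.921fb54442d18p+1'
print(s)
assert float.fromhex(s) == math.pi   # the round trip is exact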
Haskell also supports the same format.
{-# LANGUAGE HexFloatLiterals #-}
-- See https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/hex_float_literals.html

main :: IO ()
main = do
  print 0x3.243f6a8885a3p0
  print 0x0.3243f6a8885a3p4
  print 0x32.43f6a8885a3p-4
Maclisp (the MIT Project MAC Lisp implementation) did not have hex floats (floats were always decimal regardless of any base settings), but it did have Roman numeral integers* around 1975. The bloke who implemented that (Guy Steele) at one point tried to memorize the hex multiplication and addition tables and actually use hex integers in everyday things, and reported that it was way harder than he had expected.
*: Rumor had it that it also had Cuneiform I/O, but that trying to use it would only cause the system to get wedged.
From the title I thought this was going to be an article on IBM’s mainframe hex floating point format (something that nobody else in the universe implements)
https://en.wikipedia.org/wiki/IBM_hexadecimal_floating_point
thank you!
FYI, Java has supported hexadecimal floating-point literals since Java 5 (2004): https://docs.oracle.com/javase/specs/jls/se16/html/jls-3.html#jls-3.10.2