How do you evaluate the sine of a large number in floating point arithmetic? What does the result even mean?
Sine of a trillion
Let’s start by finding the sine of a trillion (10^12) using floating point arithmetic. There are a couple of ways to think about this. The floating point number t = 1.0e12 can only be represented to 15 or 16 significant decimal figures (to be precise, 53 bits), and so you could think of it as a representative of the interval of numbers that all have the same floating point representation. Any result that is the sine of a number in that interval should be considered correct.
Another way to look at this is to say that t can be represented exactly (its binary representation requires 40 bits, and we have 53 bits of significand precision available) and so we expect sin(t) to return the sine of exactly one trillion, correct to full precision.
It turns out that IEEE arithmetic does the latter, computing sin(1e12) correctly to 16 digits.
Here’s the result in Python:

>>> sin(1.0e12)
-0.6112387023768895

and verified by Mathematica by computing the value to 20 decimal places:

In:= N[Sin[10^12], 20]
Out= -0.61123870237688949819

Note that the result above is not what you’d get if you were first to take the remainder when dividing by 2π and then take the sine.

>>> sin(1.0e12 % (2*pi))
-0.6112078499756778

This makes sense. The result of dividing a trillion by the floating point representation of 2π, 159154943091.89536, is correct to full floating point precision. But most of that precision is in the integer part. The fractional part has only five digits of precision, and so we should expect the result above to be correct to at most five digits. In fact it’s accurate to four digits.
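The precision loss is easy to reproduce in Python. This is just a sketch using the standard math module; the literal values in the comments are the ones quoted above:

```python
import math

t = 1.0e12

# Dividing by the double-precision value of 2π gives a quotient that is
# accurate to full precision, but nearly all of that precision is spent
# on the integer part.
q = t / (2 * math.pi)                 # 159154943091.89536

naive = math.sin(t % (2 * math.pi))   # -0.6112078499756778
accurate = math.sin(t)                # -0.6112387023768895

# The naive result agrees with the accurate one to only about four digits.
print(abs(naive - accurate))          # roughly 3e-5
```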
When your processor computes sin(1e12) it does not naively take the remainder by 2π and then compute the sine. If it did, you’d get four significant figures rather than 16.
We started out by saying that there were two ways of looking at the problem, and according to the first one, returning only four significant figures would be quite reasonable. If we think of the value t as a measured quantity, measured to the precision of floating point arithmetic (though hardly anything can be measured that accurately), then four significant figures would be all we should expect. But the people who designed the sine function implementation on your processor did more than they might be expected to do, finding the sine of a trillion to full precision. They do this using a range reduction algorithm that retains far more precision than naively doing a division. (I worked on this problem a long time ago.)
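The idea behind accurate range reduction can be sketched in Python with exact rational arithmetic. This is an illustration of the principle, not the algorithm any math library actually uses; π is hardcoded here to 35 decimal places, which is plenty of extra precision for an argument the size of a trillion:

```python
from fractions import Fraction
import math

# π truncated to 35 decimal places
PI = Fraction(314159265358979323846264338327950288, 10**35)

t = Fraction(10**12)          # one trillion, exact
k = int(t / (2 * PI))         # number of whole periods of 2π to remove
r = t - 2 * PI * k            # reduced angle, still an exact rational

# float(r) is the reduced angle correct to double precision, so an
# ordinary sine of it is accurate to nearly 16 digits.
print(math.sin(float(r)))     # matches -0.6112387023768895 to ~15 digits
```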
Sine of a googol?
What if we try to take the sine of a ridiculously large number like a googol (10^100)? This won’t work because a googol cannot be represented exactly as a floating point number (i.e. as an IEEE 754 double). It’s not too big; floating point numbers can be as large as around 10^308. The problem is that a number that big cannot be represented to full precision. But we can represent 2^333 exactly, and a googol is between 2^332 and 2^333. And amazingly, Python’s sine function (or rather the sine function built into the AMD processor on my computer) returns the correct result to full precision.
>>> sin(2**333)
0.9731320373846353
How do we know this is right? I verified it in Mathematica:
In:= Sin[2.0^333]
Out= 0.9731320373846353
How do we know Mathematica is right? We can do naive range reduction using extended precision, say 200 decimal places.
In:= p = N[Pi, 200]
In:= x = 2^333
In:= theta = x - IntegerPart[x/(2 p)] 2 p
Out= 1.8031286178334699559384196689…
and verify that the sine of the range reduced value is correct.
In:= Sin[theta]
Out= 0.97313203738463526…
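The same verification can be done in Python without Mathematica. The sketch below is my own choice of method, not anything from the post: it computes π to 150 decimal places with Machin’s formula in integer fixed-point arithmetic, then does the naive range reduction of 2^333 at that precision:

```python
import math

def arctan_inv(x, one):
    # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ... in fixed point,
    # where `one` is the scaling factor.
    total = term = one // x
    xsq, n, sign = x * x, 3, -1
    while term:
        term //= xsq
        total += sign * (term // n)
        n += 2
        sign = -sign
    return total

def pi_fixed(one):
    # Machin's formula: π = 16 arctan(1/5) - 4 arctan(1/239)
    return 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)

one = 10**150                     # 150 decimal places of working precision
two_pi = 2 * pi_fixed(one)

t = 2**333                        # about 1.75e100, so ~50 digits to spare
k = (t * one) // two_pi           # whole periods of 2π to subtract
r = (t * one - k * two_pi) / one  # reduced angle as an ordinary float

print(math.sin(r))                # 0.9731320373846353, to within an ulp or so
```

Since the reduction error here is far below double precision, an ordinary sine of the reduced angle confirms the value above.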
Interval arithmetic interpretation
Because floating point numbers have 53 bits of precision, all real numbers between 2^56 – 2^2 and 2^56 + 2^2 are represented as the floating point number 2^56. This is a range of width 8, wider than 2π, and so the sines of the numbers in this range cover the possible values of sine, i.e. [-1, 1]. So if you think of a floating point number as a sample from the set of all real numbers with the same binary representation, every possible value of sine is a valid return value for numbers bigger than 2^56. But if you think of a floating point number as representing itself exactly, then it makes sense to ask for the sine of numbers like 2^333 and larger, up to the limits of floating point representation.
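That spacing is easy to check with the float inspection helpers in Python’s math module (available since Python 3.9):

```python
import math

x = 2.0**56
# The double just below 2^56 differs from it by 2^(55-52) = 8.
below = math.nextafter(x, 0.0)
assert x - below == 8.0

# That gap is wider than one full period of sine.
assert x - below > 2 * math.pi
```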
More numerical math posts
- Math functions that seem unnecessary
- NaN, 1.#IND, 1.#INF, and all that
- Anatomy of a floating point number
An IEEE 754 double has 52 significand bits, but these can represent 53 bits of precision because the leading bit of the significand of a normalized number is always 1, and so it does not need to be stored. See more here.
The largest exponent of an IEEE double is 1023, and the largest significand is 2 – 2^-53 (i.e. all 1’s), so the largest possible value of a double is

(2^53 – 1) 2^(1024-53)

and in fact the Python expression

>>> sin((2**53 - 1)*2**(1024-53))

returns the correct value to 16 significant figures.
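The formula in footnote 2 can be sanity-checked against what Python itself reports as the largest finite double:

```python
import sys

# Largest finite double: significand (2 - 2^-53) times 2^1023,
# which equals (2^53 - 1) * 2^(1024-53) as an integer.
largest = (2**53 - 1) * 2**(1024 - 53)
assert largest == int(sys.float_info.max)
assert float(largest) == sys.float_info.max
```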
8 thoughts on “Sine of a googol”
I’ve been bit by a related problem when using ‘bc’. Usually ‘scale’ is the number of digits of precision, but I found that the range reduction used by “s()” and “c()” seems to be naive, so the value of sine or cosine for large numbers will bounce around unless scale is set big enough. It is particularly surprising when e.g. you find that c(x)^2 + s(x)^2 does not equal one for large x (unless scale is managed right).
Range reduction has close relationships with domain constraints and modulo arithmetic. One of the great things about using ‘bc’ is that it makes you think about the numbers and the algorithm you are using, the why and how, instead of “just using them”.
In prior comments I’ve shared war stories of hammering on algorithms to eliminate the need for an FPU, even to the point of running them in real-time on an 8-bit integer processor that lacked a hardware divide instruction. Many of those algorithms were iteratively developed using ‘bc’ scripts. With lots of analysis of the “bit erosion” caused by each and every operation, and the propagation of errors and uncertainty.
Managing bc’s ‘scale’ and ‘length’ parameters was critically important. Should I do this operation as a multi-byte integer or use a fixed-point representation? A couple passes of ‘bc’ with different ‘scale’ and ‘length’ values would quickly validate whatever conclusion I had reached analytically, with the final decision concerning execution time, not accuracy.
So I got lazy: If you run a naive algorithm solely in the integer domain, the most important piece of information is simply the width of each integer (significant bits/digits) needed to capture the extreme values at each point in the calculation. Focus effort on conserving bits where there are few, and adding transformations or constraints where there are many.
An example: There are “only” about 10^80 atoms in the visible universe (80 decimal digits, or 266 bits). So, any calculation to “choose an atom” needs just 81 decimal digits or 266 bits in the result. That’s not even a “large” problem!
What about the location of that particular atom? Knowing its centroid location to the Angstrom (~100 picometers) should suffice. The largest knowable radius of the universe is its “comoving distance”, which is now about 46.6 billion light-years (GLY). For simplicity, let’s make it a Cartesian cube with 3 axes that range +/- 46.6 GLY from the origin.
If we do our unit conversion from GLY (~10^25 m/GLY) to Angstrom (~10^10 Angstrom/m), we find there are ~10^35 Angstrom/GLY. If we round the radius up to 50 GLY and double it to make an axis, we have 10^2 GLY/axis, or ~10^37 Angstroms/axis. That’s 37 decimal digits per axis. About 123 bits. It takes three of those numbers to make a coordinate set, or 37 * 3 = 111 decimal digits (~370 bits) total. Still not all that big.
Yeah, we can solve that using ‘bc’ with ‘length’ set to no more than 111 (if we simply count and label the cubic Angstroms) and ‘scale’ set to zero (if we do everything in the integer domain).
The reason this works is that I chose my unit (Angstroms) to be one that greatly exceeds the worst-case required accuracy of the result. Meaning it is an inherently integer unit.
It is important that bc’s ‘scale’ parameter defaults to zero. Why do you think that is?
In footnote 2, one among 1023 and 1024 must be wrong and should be corrected. Also, 2^1024 has 309 digits, so you must choose more than that, say 320 digits of precision, for your Mathematica verification.
Python represents integers exactly, as you can tell if you type 2**333 into Python. You get something different if you type sin(2.0**333).
@MFH, I believe the footnote is correct. The largest significand is almost 2, and the largest exponent is 1023, so the largest representable floating point number is just shy of 2^1024. It may look like there’s an extra factor of 2, but that comes from the significand being almost 2, not almost 1. In detail, (2 – 2^-53)*2^1023 = (2^53 – 1)*2^(1024-53).
In my example where I used pi to 200 digits, I was working with 2^333, which has 101 digits. I wasn’t working with (just less than) 2^1024 at that point. I didn’t include in the post my code for verifying sin((2**53 – 1)*2**(1024-53)).
@John, I agree with the figures in your answer, but the footnote says “(2^53 – 1)*2^(1023-53)” (in the “displayed” equation, the line above sin([correct value]).
@MFH, Thanks. It took me a while to see what you’re saying. This is tedious stuff! I corrected the exponent in the displayed equation.
> And amazingly, Python’s sine function (or rather the sine function built into the AMD processor on my computer) returns the correct result to full precision.
I suspect Python isn’t actually using the sine instruction built in to your processor, but rather a software implementation. Apparently, Intel’s sine instruction doesn’t actually do a great job of range reduction: https://randomascii.wordpress.com/2014/10/09/intel-underestimates-error-bounds-by-1-3-quintillion/
You can see an example of a software implementation of sine in openlibm:
* Sine “kernel”: https://github.com/JuliaMath/openlibm/blob/cca41bc1abd01804afa4862bbd2c79cc9803171a/src/k_sin.c
* Wrapper that does range reduction if necessary: https://github.com/JuliaMath/openlibm/blob/cca41bc1abd01804afa4862bbd2c79cc9803171a/src/s_sin.c
* Range reduction routine: https://github.com/JuliaMath/openlibm/blob/cca41bc1abd01804afa4862bbd2c79cc9803171a/src/e_rem_pio2.c