# Extended floating point precision in R and C

The GNU MPFR library is a C library for extended precision floating point calculations. The name stands for Multiple Precision Floating-point Reliable. The library has an R wrapper Rmpfr that is more convenient for interactive use. There are also wrappers for other languages.

It takes a long time to install MPFR and its prerequisite GMP, and so I expected it to take a long time to install Rmpfr. But the R library installs quickly, even on a system that doesn’t have MPFR or GMP installed. (I installed GMP and MPFR from source on Linux, but installed Rmpfr on Windows. Presumably the Windows R package included pre-compiled binaries.)

I’ll start by describing the high-level R interface, then go into the C API.

## Rmpfr

You can call the functions in Rmpfr with ordinary numbers. For example, you could calculate ζ(3), the Riemann zeta function evaluated at 3.

```
> zeta(3)
1 'mpfr' number of precision  128   bits
 1.202056903159594285399738161511449990768
```

The default precision is 128 bits, and a numeric argument is interpreted as a 128-bit MPFR object. R doesn’t have a built-in zeta function, so the only available zeta is the one from Rmpfr. If you ask for the cosine of 3, you’ll get ordinary precision.

```
> cos(3)
 -0.9899925
```

But if you explicitly pass cosine a 128-bit MPFR representation of the number 3, you will get cos(3) to 128-bit precision.

```
> cos(mpfr(3, 128))
1 'mpfr' number of precision  128   bits
 -0.9899924966004454572715727947312613023926
```

Of course you aren't limited to 128 bits. For example, you could find π to 100 decimal places by multiplying the arctangent of 1 by 4.

```
> 100*log(10)/log(2) # number of bits needed for 100 decimals
 332.1928

> 4*atan(mpfr(1,333))
1 'mpfr' number of precision  333   bits
 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706807
```
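One caveat: the `mpfr` constructor also accepts character strings, and that matters for constants that are not exactly representable in double precision. A numeric literal such as 0.1 is rounded to a 53-bit double by R before Rmpfr ever sees it, while a string is converted directly at the requested precision. A quick sketch, with the outputs left to the reader:

```
> mpfr(0.1, 128)   # 0.1 is rounded to a double first; the representation
                   # error shows up around the 17th significant figure
> mpfr("0.1", 128) # the string "0.1" is converted straight to 128 bits
```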

## MPFR C library

The following C code shows how to compute cos(3) to 128-bit precision and 4 atan(1) to 333-bit precision, as above.

```
#include <stdio.h>
#include <gmp.h>
#include <mpfr.h>

int main(void)
{
    // All functions require a rounding mode.
    // This mode specifies round-to-nearest.
    mpfr_rnd_t rnd = MPFR_RNDN;

    mpfr_t x, y;

    // Initialize x and y as 128-bit numbers.
    // Their values start out as NaN.
    mpfr_init2(x, 128);
    mpfr_init2(y, 128);

    // Set x to the C double number 3
    mpfr_set_d(x, 3, rnd);

    // Set y to the cosine of x
    mpfr_cos(y, x, rnd);

    // Print y to standard out in base 10
    printf("y = ");
    mpfr_out_str(stdout, 10, 0, y, rnd);
    putchar('\n');

    // Compute pi as 4*atan(1)

    // Change the precision of x and y to 333 bits.
    // (Use mpfr_set_prec here, not a second call to mpfr_init2:
    // re-initializing an already initialized variable leaks memory.)
    mpfr_set_prec(x, 333);
    mpfr_set_prec(y, 333);

    mpfr_set_d(x, 1.0, rnd);
    mpfr_atan(y, x, rnd);

    // Multiply y by 4 and store the result back in y
    mpfr_mul_d(y, y, 4.0, rnd);

    printf("y = ");
    mpfr_out_str(stdout, 10, 0, y, rnd);
    putchar('\n');

    // Release memory
    mpfr_clear(x);
    mpfr_clear(y);

    return 0;
}
```

If this code is saved in the file `hello_mpfr.c`, then you can compile it with

```
gcc hello_mpfr.c -lmpfr -lgmp
```

One line above deserves a little more explanation. The second and third arguments to `mpfr_out_str` are the base b and the number of significant figures n to print.

We chose b = 10, but you could specify any base b with 2 ≤ b ≤ 62.

If n were set to 100, the output would contain 100 significant figures. When n = 0, MPFR determines the number of digits to output: enough digits that the string representation could be read back in exactly. To understand how many digits that is, see Matula’s theorem in the previous post.
