The best polynomial approximation, in the sense of minimizing the maximum error, can be found by the Remez algorithm. I expected Mathematica to have a function implementing this algorithm, but apparently it does not have one. (But see update below.)

It has a function named `MiniMaxApproximation`, which sounds like the Remez algorithm, and it's close, but it's not it.

To use this function you first have to load the `FunctionApproximations` package.

<< FunctionApproximations`

Then we can use it, for example, to find a polynomial approximation to *e*^{x} on the interval [−1, 1].

MiniMaxApproximation[Exp[x], {x, {-1, 1}, 5, 0}]

This returns the polynomial

1.00003 + 0.999837 x + 0.499342 x^2 + 0.167274 x^3 + 0.0436463 x^4 + 0.00804051 x^5

And if we plot the error, the difference between *e*^{x} and this polynomial, we see that we get a good fit.

But we know this isn’t optimal because there is a theorem that says the optimal approximation has equal ripple error. That is, the absolute value of the error at all its extrema should be the same. In the graph above, the error is quite a bit larger on the right end than on the left end.

Still, the error is not much larger than the smallest possible using 5th degree polynomials. And the error is about 10x smaller than using a Taylor series approximation.

Plot[Exp[x] - (1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120), {x, -1, 1}]
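As a sanity check on those error sizes, here is a short Python sketch (the code above is Mathematica; the minimax coefficients below are the rounded ones printed by `MiniMaxApproximation`) comparing the maximum absolute error of the two degree-5 approximations on [−1, 1]:

```python
import math

# Degree-5 Taylor polynomial of e^x about 0
def taylor5(x):
    return 1 + x + x**2/2 + x**3/6 + x**4/24 + x**5/120

# Rounded coefficients printed by MiniMaxApproximation above
def minimax5(x):
    return (1.00003 + 0.999837*x + 0.499342*x**2
            + 0.167274*x**3 + 0.0436463*x**4 + 0.00804051*x**5)

xs = [-1 + 2*i/1000 for i in range(1001)]
taylor_err  = max(abs(math.exp(x) - taylor5(x)) for x in xs)
minimax_err = max(abs(math.exp(x) - minimax5(x)) for x in xs)
print(taylor_err, minimax_err)  # roughly 1.6e-3 vs 1.1e-4
```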

**Update**: Jason Merrill pointed out in a comment what I was missing. It turns out `MiniMaxApproximation` finds an approximation that minimizes *relative* error. Since *e*^{x} doesn't change that much over [−1, 1], the absolute error and relative error aren't radically different. There is no option to minimize absolute error.

When you look at the approximation error divided by *e*^{x} you get the ripples you’d expect.
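Here is a quick numerical check of that, again as a Python sketch using the rounded coefficients printed above (because the coefficients are rounded, the ripples come out only approximately equal in size):

```python
import math

# Rounded coefficients printed by MiniMaxApproximation above
def minimax5(x):
    return (1.00003 + 0.999837*x + 0.499342*x**2
            + 0.167274*x**3 + 0.0436463*x**4 + 0.00804051*x**5)

xs = [-1 + 2*i/1000 for i in range(1001)]
rel_errs = [(math.exp(x) - minimax5(x)) / math.exp(x) for x in xs]
# Ripples of a few times 1e-5 on either side of zero
print(max(rel_errs), min(rel_errs))
```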

See the next post for a way to construct near optimal polynomial approximations using Chebyshev approximation.
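As a taste of the Chebyshev approach, NumPy can build a Chebyshev interpolant directly. This is a sketch in Python rather than Mathematica, and the interpolant is near optimal rather than optimal:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Degree-5 interpolant of exp at Chebyshev points on [-1, 1]
p = Chebyshev.interpolate(np.exp, 5, domain=[-1, 1])

xs = np.linspace(-1, 1, 1001)
cheb_err = np.max(np.abs(np.exp(xs) - p(xs)))
print(cheb_err)  # a few times 1e-5, close to the best possible absolute error
```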

According to the documentation [1], `MiniMaxApproximation` minimizes relative error, whereas the classic Remez algorithm minimizes absolute error. If you're constructing a polynomial to use as a floating-point approximation, it's probably more common to want to minimize relative error than absolute error.

I thought I remembered there being a way to change the error metric, but I can’t find it in these docs.

[1] https://reference.wolfram.com/language/FunctionApproximations/ref/MiniMaxApproximation.html

It looks like GeneralMiniMaxApproximation may be the way to control the error metric.

https://reference.wolfram.com/language/FunctionApproximations/ref/GeneralMiniMaxApproximation.html

Nick Trefethen’s book Approximation Theory and Approximation Practice has a chapter discussing why you might choose Chebyshev over Remez. The short answer is that Remez is indeed much more work for a very small gain. It would be strange if Mathematica didn’t have a method for it, though.

I think there is a typo in the 5th term of the Taylor expansion. There should be a 24 in the denominator instead of 25.

Thanks. I fixed the error and revised the graph.