Within one percent

This post looks at some common approximations and determines the range over which each has an error of less than 1 percent. So everywhere in this post “≈” means “with relative error less than 1%.”

Whether 1% relative error is good enough completely depends on context.

Constants

The familiar approximations for π and e are good to within 1%: π ≈ 22/7 and e ≈ 19/7. (OK, the approximation for e isn’t so familiar, but it should be.)

Also, the speed of light is c ≈ 300,000 km/s and the fine structure constant is α ≈ 1/137. See also Koide’s coincidence.
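
As a quick check, here are the relative errors of the two fractions above. (A minimal sketch; the exact constants come from Python’s math module.)

from math import pi, e

# Relative errors of the rational approximations above
for name, approx, exact in [("pi", 22/7, pi), ("e", 19/7, e)]:
    print(name, abs(approx - exact)/exact)  # about 0.04% and 0.15%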

Trig functions

The following hold for angles in radians.

  • sin x ≈ x for |x| < 0.244.
  • cos x ≈ 1 – x²/2 for |x| < 0.662.
  • tan x ≈ x for |x| < 0.173.
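
Cutoffs like 0.244 can be found numerically by solving for where the relative error reaches 1%. Here’s a sketch using scipy.optimize.brentq; the bracket [0.1, 0.5] is an assumption that the crossing lies inside it.

from numpy import sin
from scipy.optimize import brentq

# Solve for the x where the relative error of sin x ≈ x equals 1%
cutoff = brentq(lambda x: (x - sin(x))/sin(x) - 0.01, 0.1, 0.5)
print(cutoff)  # approximately 0.244

The same root-finding approach recovers the other cutoffs in this post.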

Inverse trig functions

Here again angles are in radians.

  • arcsin x ≈ x for |x| < 0.242.
  • arccos x ≈ π/2 – x for |x| < 0.4.
  • arctan x ≈ x for |x| < 0.173.

Log

Natural log has the following useful approximation:

  • log(1 + x) ≈ x for −0.0199 < x < 0.0200.

Factorial and gamma

Stirling’s approximation leads to the following.

  • Γ(x) ≈ √(2π/x) (x/e)^x for x > 8.2876.
  • n! ≈ √(2π/(n+1)) ((n+1)/e)^(n+1) for n ≥ 8.
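
The verification code at the end of this post tests the gamma form. As a quick sanity check of the factorial form, here’s a sketch comparing it against math.factorial.

from math import sqrt, pi, e, factorial

def stirling_factorial(n):
    # n! ≈ √(2π/(n+1)) ((n+1)/e)^(n+1)
    m = n + 1
    return sqrt(2*pi/m) * (m/e)**m

for n in [8, 10, 20]:
    rel_err = abs(stirling_factorial(n) - factorial(n)) / factorial(n)
    assert rel_err < 0.01  # under 1%, and shrinking as n grows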

Commentary

Stirling’s approximation is different from the other approximations because it is an asymptotic approximation: it improves as its argument gets larger.

The rest of the approximations are valid over finite intervals. These intervals are symmetric when the function being approximated is symmetric, that is, even or odd. So, for example, the interval is symmetric for sine but not for log.

For sine and tangent, and their inverses, the absolute error is O(x³) and the value is O(x), so the relative error is O(x²). [1]

The widest interval is for cosine. That’s because the absolute error and relative error are O(x⁴). [2]

The narrowest interval is for log(1 + x), due to lack of symmetry. The absolute error is O(x²), the value is O(x), and so the relative error is only O(x).
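
These orders are easy to see numerically. In the following sketch, halving x cuts the relative error of sin x ≈ x by about a factor of 4, but cuts the relative error of log(1 + x) ≈ x only by about a factor of 2.

from numpy import sin, log1p

def rel_err(f, x):
    # relative error of the approximation f(x) ≈ x
    return abs((f(x) - x)/f(x))

for x in [0.1, 0.05, 0.025]:
    print(x, rel_err(sin, x), rel_err(log1p, x))
# sin column shrinks about 4x per halving; log1p column about 2x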

Verification

Here’s Python code to validate the claims above, assuming the maximum relative error always occurs on the ends, which it does in these examples. We only need to test one side of symmetric approximations to symmetric functions because they have symmetric error.

from numpy import sin, cos, tan, arcsin, arccos, arctan, log1p, sqrt, pi, e
from scipy.special import gamma

def stirling_gamma(x):
    # Stirling's approximation to the gamma function
    return sqrt(2*pi/x)*(x/e)**x

identity = lambda x: x

# Each tuple: function, its approximation, and the endpoint of its 1% interval
for f, approx, x in [
        (sin, identity, 0.244),
        (tan, identity, 0.173),
        (arcsin, identity, 0.242),
        (arctan, identity, 0.173),
        (cos, lambda x: 1 - 0.5*x*x, 0.662),
        (arccos, lambda x: 0.5*pi - x, 0.4),
        (log1p, identity, 0.02),
        (log1p, identity, -0.0199),  # log is not symmetric: check both ends
        (gamma, stirling_gamma, 8.2876),
    ]:
    assert abs((f(x) - approx(x))/f(x)) < 0.01

[1] Odd functions have only terms with odd exponents in their series expansion around 0. The error near 0 in a truncated series is roughly equal to the first series term not included. That’s why we get third order absolute error from a first order approximation.

[2] Even functions have only even terms in their series expansion, so our second order expansion for cosine has fourth order error. And because cos(0) = 1, the relative error is basically the absolute error near 0.

2 thoughts on “Within one percent”

  1. One that I find handy: the hypotenuse of a right triangle with legs a and b (where a < b) can be approximated to within 1% by 5(a + b)/7 when 1.04 ≤ b/a ≤ 1.50.
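
This claim checks out numerically. Here’s a quick sweep (a sketch; the grid size is arbitrary):

from numpy import hypot, linspace

# Check the commenter's claim: hypot(a, b) ≈ 5(a + b)/7
# for 1.04 ≤ b/a ≤ 1.50
a = 1.0
for r in linspace(1.04, 1.50, 47):
    b = r*a
    assert abs(5*(a + b)/7 - hypot(a, b))/hypot(a, b) < 0.01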
