Here’s a pitfall in C# that keeps coming up. C# has a constant double.Epsilon that programmers coming from C naturally assume is the same as C’s DBL_EPSILON. It’s not. In fact, the former is hundreds of orders of magnitude smaller.

double.Epsilon is the smallest positive value a double can represent, i.e., the floating point number closest to 0. C’s DBL_EPSILON is the distance between 1 and the closest floating point number greater than 1. Said another way, DBL_EPSILON is the smallest positive floating point number x such that 1 + x != 1, often called “machine epsilon.”

double.Epsilon is on the order of 10^-324 while DBL_EPSILON is on the order of 10^-16. (These values could in principle vary by platform, but in practice they hardly ever do.)
C# has no constant corresponding to DBL_EPSILON. This is unfortunate, since this constant appears frequently in numerical software. Why? Because it tells you, for example, when to stop summing a series. Since DBL_EPSILON is on the order of 10^-16, if you add two positive numbers that differ by more than 16 orders of magnitude, the smaller one doesn’t change the sum. If you’re summing a decreasing series of numbers, say in order to evaluate a Taylor approximation, you might as well stop once the next term is 16 orders of magnitude smaller than the sum. If you keep going past that point, you’ll burn CPU cycles but you won’t change your answer.
Again, DBL_EPSILON is almost always about 10^-16. But by giving the value a name, you avoid scattering 10^-16 as a mysterious constant throughout your code. And if your code should ever move to an environment with different floating point resolution, it will correctly adjust to the new platform.