The last two blog posts have elaborated on the fact that when a function satisfies a second order differential equation, once you’ve evaluated the function and its derivative you get the second derivative for free, or at least cheaply.
Sometimes functions are so closely related that software libraries bundle them together. For example, Python’s scientific computing library SciPy has no function that returns just one Jacobi elliptic function. Because it takes about as much work to compute one of the functions sn, cn, dn, and ph as it does to compute all of them, SciPy provides a function ellipj that returns all four at once. If you only want to compute sn, call ellipj and discard all but the first of the four values returned. No loss. But if you need two or more of the functions, the bundling saves compute time.
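Here’s a minimal sketch of what that looks like (the arguments 0.5 and 0.7 are just illustrative values of the argument u and the parameter m):

```python
from scipy.special import ellipj

# ellipj(u, m) returns sn(u), cn(u), dn(u), and the amplitude ph,
# all at once, for parameter m.
sn, cn, dn, ph = ellipj(0.5, 0.7)

# If you only want sn, discard the rest of the tuple.
sn, *_ = ellipj(0.5, 0.7)
```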
Jacobi functions
I haven’t looked at how SciPy computes Jacobi functions, but I imagine the code to compute the various functions overlaps so much that the additional cost of computing all of them is minimal if you’re going to compute one of them. Jacobi functions are related to theta functions, and theta functions can be computed very efficiently.
I wrote about the interconnections between Jacobi functions in an earlier post. The three functions sn, cn, and dn all satisfy a simple system of differential equations. Writing x = sn, y = cn, and z = dn, where k is the modulus, the system is

x′ = yz
y′ = −zx
z′ = −k² xy

with initial conditions x(0) = 0, y(0) = 1, and z(0) = 1. The image at the top of the post plots (sn(t), cn(t), dn(t)) and is taken from that post. The Jacobi functions are analogous to trig functions, and like trig functions they satisfy myriad identities relating the functions to each other.
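As a quick check on this system, here’s a sketch that integrates it numerically and compares the first component to SciPy’s sn. Note that SciPy parameterizes the functions by m = k²; the value m = 0.7 is arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj

m = 0.7  # elliptic parameter, m = k^2; arbitrary illustrative value

def rhs(t, u):
    x, y, z = u  # x = sn, y = cn, z = dn
    return [y*z, -z*x, -m*x*y]

ts = np.linspace(0, 2, 50)
sol = solve_ivp(rhs, (0, 2), [0.0, 1.0, 1.0], t_eval=ts,
                rtol=1e-10, atol=1e-12)

sn, cn, dn, _ = ellipj(ts, m)
print(np.max(np.abs(sol.y[0] - sn)))  # tiny: the integrated x agrees with sn
```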
Normal random values
An example of BOGO (buy one, get one free) in probability is generating samples from a normal random variable. The most common approach generates two independent uniform samples and transforms them into two independent normal samples. So if you ask the software for one normal random value, it could give you a second one at no extra computational expense. Using this approach, a function that returns a vector of 2n random values could do so in the time it would take to generate n values one at a time.
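One such transformation is the Box–Muller transform (the post doesn’t name a specific method; this is a standard instance). A minimal sketch:

```python
import numpy as np

def box_muller(rng):
    """Turn two uniform samples into two independent standard normal samples."""
    u1 = 1.0 - rng.random()  # in (0, 1], so log(u1) is finite
    u2 = rng.random()
    r = np.sqrt(-2.0 * np.log(u1))
    theta = 2.0 * np.pi * u2
    return r * np.cos(theta), r * np.sin(theta)

rng = np.random.default_rng(42)
z1, z2 = box_muller(rng)  # one call, two normal values
```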
Automatic differentiation
A final example that I’ll mention is automatic differentiation. With this technique it is often possible to evaluate a function and its gradient in little more time than it takes to evaluate the function alone. Having an explicit form of the derivative is not only unnecessary, it may be less efficient than letting automatic differentiation compute the derivative.
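For example (assuming JAX here, which the text doesn’t specify), jax.value_and_grad evaluates a function and its gradient together:

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sum(jnp.sin(x) ** 2)

# One call returns both the value of f and its gradient,
# sharing the forward computation between the two.
value, grad = jax.value_and_grad(f)(jnp.array([0.1, 0.2, 0.3]))
print(value, grad)
```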
Cautionary example
On the flip side, it sometimes seems that you can buy one and get one free when you can’t or shouldn’t. For example, software libraries often have routines to evaluate the error function erf(x) and its complement erfc(x) = 1 – erf(x). In theory you can calculate one from the other, and often in practice too. But if erf(x) is close to 1, 1 – erf(x) might evaluate to zero in floating point arithmetic even though erfc(x) is not zero. For more examples along these lines, see Math functions that seem unnecessary.
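Here’s a small demonstration of the problem (x = 10 is an arbitrary point far out in the tail):

```python
from scipy.special import erf, erfc

x = 10.0
print(1.0 - erf(x))  # 0.0: erf(10) rounds to exactly 1 in double precision
print(erfc(x))       # about 2.09e-45, the correct value
```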