Suppose you have a function g(x) and you want to find x so that g(x) = 0. However, you don’t have direct access to g(x). Instead you can evaluate f(x), which is g(x) plus random noise. Typically f(x) would be an approximation of g(x) based on Monte Carlo simulation. I have spent countless hours solving problems like this.
One difficulty is that the value of f(x) will change every time you compute it. You might hunt down x and be sure you have it bracketed between two values, only to find out you were wrong when you compute f(x) more accurately, say with more Monte Carlo steps.
Assume g(x), i.e. f(x) without noise, is monotone increasing. Assume also that f(x) is expensive to compute. Say each evaluation takes 20 minutes. The amplitude of the noise may vary with x.
Here’s a simple approach to trying to find where f(x) is zero. Use a bisection method, but stop when it looks like you’ve come as close as you can due to noise. If the monotonicity assumption is violated, use that as a signal that you’re within the resolution of the noise.
    # Assume a < b, fa = f(a) < 0, fb = f(b) > 0.
    def noisy_bisect(f, a, b, fa, fb, tolerance):
        if b - a < tolerance:
            return (a, b)
        mid = 0.5*(a + b)
        fmid = f(mid)
        if fmid < fa or fmid > fb:
            # Monotonicity violated. Reached resolution of noise.
            return (a, b)
        if fmid < 0:
            a, fa = mid, fmid
        else:
            b, fb = mid, fmid
        return noisy_bisect(f, a, b, fa, fb, tolerance)
The values of f(a) and f(b) are saved to avoid recalculation since they’re expensive to obtain.
You could experiment with this function as follows.
    from scipy.stats import norm
    from time import sleep

    # Gaussian distribution w/ mean 0 and standard deviation 0.01
    noise = norm(0, 0.01)

    def f(x):
        sleep(10)  # wait 10 seconds to simulate an expensive evaluation
        return x**2 - 1 + noise.rvs()

    a = 0
    b = 3
    fa = f(a)
    fb = f(b)
    print(noisy_bisect(f, a, b, fa, fb, 0.0001))
Without noise, the code would iterate until the zero is bracketed in an interval of width 0.0001. However, since the noise is on the order of 0.01, the method will typically stop when the bracket is on the order of 0.01 wide, maybe a little smaller.
We could make the root-finding software a little faster by using a secant method rather than a bisection method. Instead of taking the midpoint, the secant method draws a line between (a, f(a)) and (b, f(b)) and uses the point where the line crosses the x-axis as the next guess.
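As a sketch of what that might look like, here is a hypothetical noisy_secant with the same signature and monotonicity test as noisy_bisect above. The name and the extra stopping guard are my additions: the secant point can creep toward one endpoint while the other stays put, so the bracket width alone is not a reliable stopping rule for this variant.

```python
# Sketch of a secant (false position) variant of the noisy bisection
# method. Same assumptions: a < b, fa = f(a) < 0, fb = f(b) > 0, and
# the underlying noise-free function is monotone increasing.
def noisy_secant(f, a, b, fa, fb, tolerance):
    if b - a < tolerance:
        return (a, b)
    # Where the line through (a, fa) and (b, fb) crosses the x-axis.
    mid = a - fa*(b - a)/(fb - fa)
    # The secant point may stall near one endpoint; stop if the new
    # point is within tolerance of either end of the bracket.
    if min(mid - a, b - mid) < tolerance:
        return (a, b)
    fmid = f(mid)
    if fmid < fa or fmid > fb:
        # Monotonicity violated. Reached resolution of noise.
        return (a, b)
    if fmid < 0:
        a, fa = mid, fmid
    else:
        b, fb = mid, fmid
    return noisy_secant(f, a, b, fa, fb, tolerance)
```

On a noise-free, smooth function the secant point converges to the root faster than the midpoint does, which is the payoff for the extra bookkeeping.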
A more sophisticated approach would be to increase the accuracy of f(x) over time. If f(x) is evaluated using N steps of a Monte Carlo simulation, the standard deviation of the noise is roughly proportional to 1/√N. So, for example, cutting the noise in half requires multiplying N by 4. You could get a rough bracket on the zero using a small value of N, then increase N as the bracket gets smaller. You could play with this by using a function like the following.
    from math import sqrt

    def f(x, N):
        sleep(N)  # wait N seconds: more Monte Carlo steps, more time
        noise = norm(0, 0.1/sqrt(N))
        return x**2 - 1 + noise.rvs()
You can make the noise as small as you’d like, but at a price. And the price goes up faster than the noise goes down, so it pays not to reduce the noise more than necessary.
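Putting the pieces together, a driver along these lines could start with a coarse N and quadruple it, halving the noise, each time the monotonicity test fires. The function adaptive_bisect below is my sketch, not code from above; it assumes an f(x, N) like the one just shown, with noise standard deviation roughly proportional to 1/√N.

```python
def adaptive_bisect(f, a, b, tolerance, N=4):
    # f(x, N) is a noisy evaluation of g(x) using N Monte Carlo steps,
    # with noise standard deviation roughly proportional to 1/sqrt(N).
    # Assumes g(a) < 0 < g(b) by a comfortable margin.
    fa, fb = f(a, N), f(b, N)
    while b - a >= tolerance:
        mid = 0.5*(a + b)
        fmid = f(mid, N)
        if fmid < fa or fmid > fb:
            # Monotonicity violated: we've hit the noise floor for this
            # N. Quadruple N to cut the noise in half, and refresh the
            # endpoint values at the new accuracy.
            N *= 4
            fa, fb = f(a, N), f(b, N)
            continue
        if fmid < 0:
            a, fa = mid, fmid
        else:
            b, fb = mid, fmid
    return (a, b)
```

This way the cheap, noisy evaluations do the coarse bracketing and the expensive, accurate ones are saved for the endgame, which is exactly where the price of low noise is worth paying.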