A basic idea in numerical integration is that if a method integrates polynomials exactly, it should do well on polynomial-like functions [1]. The higher the degree of polynomial it integrates exactly, the more accurate we expect it will be on functions that behave like polynomials.
The best known example of this is Gaussian quadrature. However, this post shows why for some applications you might want to use Lobatto quadrature instead. Lobatto quadrature is similar in spirit to Gaussian quadrature, but allocates points to solve a slightly different optimization problem.
Gaussian quadrature
If you need to integrate a function over an interval, it matters a great deal where you choose to evaluate the function. The optimal choice, in terms of what polynomials you can integrate exactly, is to use Gaussian quadrature. By evaluating the integrand at n optimally chosen points you can integrate polynomials of degree 2n − 1 or less exactly.
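Here is a minimal sketch of that claim, assuming NumPy is available. With n = 4 points, the rule integrates the degree-7 polynomial x^7 + 1 exactly (its integral over [−1, 1] is 2) but misses the degree-8 monomial x^8 (exact value 2/9).

import numpy as np
from numpy.polynomial.legendre import leggauss

x, w = leggauss(4)              # 4-point Gauss-Legendre points and weights on [-1, 1]
print(np.dot(w, x**7 + 1))      # prints 2.0 (up to roundoff): degree 7 = 2*4 - 1 is integrated exactly
print(np.dot(w, x**8), 2/9)     # about 0.2106 vs 2/9 = 0.2222...: degree 8 exceeds 2n - 1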
So if Gaussian integration is optimal, why would you want to do anything else? The Gaussian integration points are all interior to the interval of integration. In some applications you have to evaluate your integrand at the end points, and so you want to optimize subject to a constraint: how can you best allocate your integration points, given that two of them must be the two ends of the interval? The solution is Lobatto quadrature, also called Gauss-Lobatto quadrature.
Lobatto quadrature
By evaluating the integrand at n points, two of which are the end points, Lobatto’s method exactly integrates polynomials of degree 2n − 3.
Suppose for whatever reason you already know the value of the integrand evaluated at the end points. You’ve got m more function evaluations to spend. If you use those m points for Gaussian quadrature, you can exactly integrate polynomials of degree
2m − 1
or less. But if you use Lobatto quadrature, your m interior evaluations plus your two known values at the end points give you a total of m+2 function evaluations, and so can integrate polynomials of degree
2(m + 2) − 3 = 2m + 1
or less exactly, two degrees higher than if we had used Gaussian quadrature.
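As a quick numerical check of this, assuming NumPy and hard-coding the standard 5-point Lobatto points and weights (the same values used in the Python example below): with m = 3 interior evaluations, the 3-point Gauss rule misses x^6 over [−1, 1], while the 5-point Lobatto rule, 3 interior points plus the two end points, returns the exact value 2/7.

import numpy as np
from numpy.polynomial.legendre import leggauss

gx, gw = leggauss(3)     # 3-point Gauss-Legendre rule: m = 3 evaluations
lx = np.array([-1.0, -0.6546536707079771, 0.0, 0.6546536707079771, 1.0])
lw = np.array([0.1, 0.5444444444444444, 0.7111111111111111, 0.5444444444444444, 0.1])

f = lambda x: x**6       # degree 6 is more than 2m - 1 = 5 but no more than 2m + 1 = 7
print(np.dot(gw, f(gx)), np.dot(lw, f(lx)), 2/7)   # 0.24 (Gauss), 0.2857... (Lobatto), 0.2857... (exact)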
Next, suppose you only know the value of the function you’re integrating at one end point. Say you’ve already integrated f(x) over the interval [1, 2] using Lobatto quadrature, and now you want to integrate over [2, 3]. You already know the value f(2) from your previous integration.
Suppose you have m new function evaluations you can afford, and you don’t know f(3). If you use Lobatto quadrature, f(3) has to come out of your function evaluation budget, so you can afford m − 1 interior integration points. You then know the value of f(x) at m + 1 points: f(2) came for free, you evaluated f(x) at m − 1 interior points and at x = 3. This lets you exactly integrate polynomials of degree
2(m + 1) − 3 = 2m − 1
or less, the same as Gaussian quadrature. But if you then need to integrate over [3, 4], knowing f(3) gives you a head start on the next integration, and so on.
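The sketch below, assuming NumPy and reusing the same 5-point Lobatto data that appears in the Python example later in this post, shows the bookkeeping: each panel reuses the value of f at its left end point, so after the first panel only four new evaluations are needed per panel.

import numpy as np

lobatto_points  = np.array([-1.0, -0.6546536707079771, 0.0, 0.6546536707079771, 1.0])
lobatto_weights = np.array([0.1, 0.5444444444444444, 0.7111111111111111, 0.5444444444444444, 0.1])

def lobatto_panel(f, a, b, fa=None):
    # Integrate f over [a, b] with the 5-point Lobatto rule, reusing fa = f(a) if it is already known.
    x = 0.5*(a + b) + 0.5*(b - a)*lobatto_points        # map the nodes from [-1, 1] to [a, b]
    fa = f(a) if fa is None else fa                     # the left end point may come for free
    fx = np.array([f(xi) for xi in x[1:]])              # new evaluations: interior points and b
    integral = 0.5*(b - a)*np.dot(lobatto_weights, np.concatenate(([fa], fx)))
    return integral, fx[-1]                             # hand back f(b) for the next panel

total, f_right = 0.0, None
for a in [1, 2, 3]:
    piece, f_right = lobatto_panel(np.cos, a, a + 1, f_right)
    total += piece

print(total, np.sin(4) - np.sin(1))    # quadrature result vs exact integral of cos over [1, 4]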
Weights and integration points
You can look up the weights and integration points for Gaussian quadrature and Lobatto quadrature in, for example, Abramowitz and Stegun.
There is a nice symmetry between the two integration methods: Gaussian quadrature uses integration points based on the zeros of Legendre polynomials, and weights that depend on the derivatives of these polynomials. Lobatto quadrature is the other way around: integration points are given by the zeros of derivatives of Legendre polynomials, and the weights involve the Legendre polynomials themselves.
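To make the symmetry concrete, here is a sketch, assuming NumPy, that generates the Lobatto points and weights from the Legendre polynomial P_{n−1}: the interior points are the zeros of its derivative, and the weights come from the standard formula w_i = 2 / (n(n − 1) [P_{n−1}(x_i)]²), which reduces to 2 / (n(n − 1)) at the end points.

import numpy as np
from numpy.polynomial.legendre import Legendre

def lobatto_rule(n):
    # n-point Lobatto points and weights on [-1, 1], built from P_{n-1}.
    P = Legendre.basis(n - 1)                # the Legendre polynomial P_{n-1}
    interior = np.sort(P.deriv().roots())    # zeros of P'_{n-1}
    x = np.concatenate(([-1.0], interior, [1.0]))
    w = 2/(n*(n - 1)*P(x)**2)                # P_{n-1}(+-1) = +-1, so the end weights are 2/(n(n-1))
    return x, w

print(lobatto_rule(5))   # should reproduce the 5-point data in the example below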
Python example
Here we’ll implement the Gauss and Lobatto rules of order five. Most of the code is data on integration points and weights.
gauss_points = [
    -0.906179845938664,
    -0.538469310105683,
     0.0,
    +0.538469310105683,
    +0.906179845938664
]

gauss_weights = [
    0.23692688505618908,
    0.47862867049936647,
    0.5688888888888889,
    0.47862867049936647,
    0.23692688505618908
]

lobatto_points = [
    -1.0,
    -0.6546536707079771,
     0,
    +0.6546536707079771,
    +1.0
]

lobatto_weights = [
    0.1,
    0.5444444444444444,
    0.7111111111111111,
    0.5444444444444444,
    0.1
]
The integration points and weights are symmetrical, so you could make the code more compact at the expense of making it a little more complicated. Putting the + in front of positive integration points is a little unconventional, but it emphasizes the symmetry by making the positive and negative points align vertically.
Here’s our integration code:
def integrate(f, xs, ws):
    return sum(f(xs[i])*ws[i] for i in range(5))
where we pass it the function to integrate and either Gauss data or Lobatto data.
The following verifies that with 5 integration points, Gauss exactly integrates a 9th degree polynomial, and Lobatto exactly integrates a 7th degree polynomial.
print( integrate(lambda x: x**9 + 1, gauss_points, gauss_weights) )
print( integrate(lambda x: x**7 + 1, lobatto_points, lobatto_weights) )
Both print 2.0 as expected. The integral of an odd function over [−1, 1] is zero, and the integral of 1 over the same interval is 2.
Now let’s use both to integrate cosine over [−1, 1].
from math import cos

print( integrate(cos, gauss_points, gauss_weights) )
print( integrate(cos, lobatto_points, lobatto_weights) )
The exact integral is 2 sin(1). Here are the results.
Exact:   1.682941969
Gauss:   1.682941970
Lobatto: 1.682942320
So Gauss is correct to 8, almost 9, decimal places, and Lobatto is correct to 5 decimal places.
Next, let’s hard code a 3rd order Gauss rule for comparison.
def gauss3(f):
    k = 0.6**0.5
    s = f(-k)*5 + f(0)*8 + f(k)*5
    return s/9
We can verify that it integrates fifth order polynomials exactly:
print(gauss3(lambda x: x**5 + 1))
and we can use it to integrate cosine:
print(gauss3(cos))
This returns 1.68300, a little less accurate than the Lobatto rule above, illustrating that typically Lobatto will be more accurate than Gauss with the same number of function evaluations interior to the interval of integration.
More on numerical integration
- Orthogonal polynomials and Gaussian quadrature [pdf]
- Double exponential integration
- Numerically integrating periodic functions
[1] Polynomials can’t have horizontal asymptotes, for example, and so we should not be surprised that a method that integrates high order polynomials exactly could still do poorly on, say, a normal probability density.