Chebyshev polynomials as distorted cosines

Forman Acton’s book Numerical Methods that Work describes Chebyshev polynomials as

cosine curves with a somewhat disturbed horizontal scale, but the vertical scale has not been touched.

The relation between Chebyshev polynomials and cosines is

Tn(cos θ) = cos(nθ).

Some sources take this as the definition of Chebyshev polynomials. Other sources define the polynomials differently and prove this equation as a theorem.

It follows that if we let x = cos θ then

Tn(x) = cos(n arccos x).

Now sin x = cos(π/2 − x) and for small x, sin x ≈ x. This means

arccos(x) ≈ π/2 − x

for x near 0, and so we should expect the approximation

Tn(x) ≈ cos(n(π/2 − x))

to be accurate near the middle of the interval [−1, 1] though not at the ends. A couple of plots show that this is the case.
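
Here is a minimal Mathematica comparison; the choice n = 5 is arbitrary. The two curves agree closely near 0 and pull apart near ±1.

Plot[{ChebyshevT[5, x], Cos[5 (Pi/2 - x)]}, {x, -1, 1}, PlotStyle -> {Automatic, Dashed}]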

More Chebyshev posts

Interpolating the gamma function

Suppose you wanted to approximate Γ(10.3). You know it’s somewhere between Γ(10) = 9! and Γ(11) = 10!, and linear interpolation would give you

Γ(10.3) ≈ 0.7 × 9! + 0.3 × 10! = 1342656.

But the exact value is closer to 716430.69, and so our estimate is about 87% too high. Not a very good approximation.

Now let’s try again, applying linear interpolation to the log of the gamma function. Our approximation is

log Γ(10.3) ≈ 0.7 × log 9! + 0.3 × log 10! = 13.4926

while the actual value is 13.4820, an error of about 0.08%. If we take exponentials to get an approximation of Γ(10.3) rather than log Γ(10.3), the error is larger, about 1%, but still much better than the 87% error above.
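
Assuming Mathematica is at hand, all three calculations are one-liners:

Gamma[10.3]                       (* 716430.69... *)
0.7*9! + 0.3*10!                  (* 1342656, about 87% too high *)
Exp[0.7 Log[9!] + 0.3 Log[10!]]   (* about 1% too high *)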

The gamma function grows very quickly, and so the log gamma function is usually easier to work with.

As a bonus, the Bohr–Mollerup theorem says that log gamma is a convex function, and a convex function lies below its chords. This tells us that linear interpolation doesn’t just give an approximation, it gives an upper bound.

The Bohr–Mollerup theorem essentially says that the gamma function is the only function that extends factorial from a function on the integers to a log-convex function on the real numbers. This isn’t quite true since it’s actually Γ(x + 1) that extends factorial. Showing the gamma function is unique is the hard part. In the preceding paragraph we used the easy direction of the theorem, saying that gamma is log-convex.

Related posts

Evaluating a class of infinite sums in closed form

The other day I ran across the surprising identity

\sum_{n=1}^\infty \frac{n^3}{2^n} = 26

and wondered how many sums of this form can be evaluated in closed form like this. Quite a few, it turns out.

Sums of the form

\sum_{n=1}^\infty \frac{n^k}{c^n}

evaluate to a rational number when k is a non-negative integer and c is a rational number with |c| > 1. Furthermore, there is an algorithm for finding the value of the sum.

The sums can be evaluated using the polylogarithm function Lis(z) defined as

\text{Li}_s(z) = \sum_{n=1}^\infty \frac{z^n}{n^s}

using the identity

\sum_{n=1}^\infty \frac{n^k}{c^n} = \text{Li}_{-k}\left(\frac{1}{c}\right)

We then need a way to evaluate Lis(z). This cannot be done in closed form in general, but it can be done when s is a negative integer, as above. To evaluate Li−k(z) we need to know two things. First,

Li_1(z) = -\log(1-z)

and second,

\text{Li}_{s-1}(z) = z \frac{d}{dz} \text{Li}_s(z)

Now Li0(z) is a rational function of z, namely z/(1 − z). The derivative of a rational function is a rational function, and multiplying a rational function of z by z produces another rational function, so Lis(z) is a rational function of z whenever s is a non-positive integer.

Assuming the results cited above, we can prove the identity

\sum_{n=1}^\infty \frac{n^3}{2^n} = 26

stated at the top of the post. The sum equals Li−3(1/2), and

\text{Li}_{-3}(z) = \left(z \frac{d}{dz}\right)^3 \frac{z}{1-z} = \frac{z(1 + 4z + z^2)}{(1-z)^4}

The result comes from plugging in z = 1/2 and getting out 26.
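
Both the sum and the polylogarithm value are easy to confirm in Mathematica:

Sum[n^3/2^n, {n, 1, Infinity}]   (* 26 *)
PolyLog[-3, 1/2]                 (* 26 *)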

When k and c are positive integers, the sum

\sum_{n=1}^\infty \frac{n^k}{c^n}

is not necessarily an integer, as it is when k = 3 and c = 2, but it is always rational. It looks like the sum is always an integer when c = 2; I verified that the sum is an integer for c = 2 and k = 1 through 10 using the PolyLog function in Mathematica.
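
That verification is a one-liner:

Table[PolyLog[-k, 1/2], {k, 1, 10}]
(* {2, 6, 26, 150, 1082, 9366, 94586, 1091670, 14174522, 204495126} *)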

Update: Here is a proof that the sum is an integer when c = 2. From a comment by Theophylline on Substack.

The sum is occasionally an integer for larger values of c. For example,

\sum_{n=1}^\infty \frac{n^4}{3^n} = 15

and

\sum_{n=1}^\infty \frac{n^8}{3^n} = 17295
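
These two values are also easy to check in Mathematica:

Sum[n^4/3^n, {n, 1, Infinity}]   (* 15 *)
Sum[n^8/3^n, {n, 1, Infinity}]   (* 17295 *)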

Related posts

q-analog of rising powers

The previous post looked at the probability that a random n by n matrix over a finite field of order q is invertible. This works out to be

\prod_{i=1}^n \left(1 - \frac{1}{q^i}\right)

This function of q and n comes up in other contexts as well and has a name that we will get to shortly.

Pochhammer symbols

Leo August Pochhammer (1841–1920) defined the kth rising power of x by

(x)_k = x(x + 1)(x + 2)\cdots(x + k - 1)

Rising and falling powers come up naturally in combinatorics, finite difference analogs of calculus, and in the definition of hypergeometric functions.

q-Pochhammer symbols

The q-analog of the Pochhammer symbol is defined as

(a;q)_n = \prod_{k=0}^{n-1} (1-aq^k)=(1-a)(1-aq)(1-aq^2)\cdots(1-aq^{n-1})

Like the Pochhammer symbol, the q-Pochhammer symbol also comes up in combinatorics.

In general, q-analogs of various concepts are generalizations that reduce to the original concept in some sort of limit as q goes to 1. The relationship between the q-Pochhammer symbol and the Pochhammer symbol is

\lim_{q\to 1} \frac{(q^x;q)_n}{(1-q)^n} = (x)_n
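
We can sanity-check this limit for specific values, say x = 3 and n = 4, where (q^3; q)_4 = (1 − q^3)(1 − q^4)(1 − q^5)(1 − q^6) and the right side is (3)_4 = 3 · 4 · 5 · 6 = 360:

Limit[(1 - q^3) (1 - q^4) (1 - q^5) (1 - q^6)/(1 - q)^4, q -> 1]   (* 360 *)
Pochhammer[3, 4]                                                   (* 360 *)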

For a simpler introduction to q-analogs, see this post on q-factorial and q-binomial coefficients.

Back to our probability problem

The motivation for this post was to give a name to the function giving the probability that a random n by n matrix over a finite field of order q is invertible. In the notation above, this function is (1/q; 1/q)n.
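
For example, for 2 × 2 matrices over the field with two elements, 6 of the 16 matrices are invertible, and Mathematica’s built-in QPochhammer agrees:

QPochhammer[1/2, 1/2, 2]   (* 3/8 *)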

There’s a confusing notational coincidence here. The number of elements in a finite field is usually denoted by q. The q in q-analogs such as the q-Pochhammer symbol has absolute value less than 1. It’s a coincidence that they both use the letter q, and the q in our application of q-Pochhammer symbols is the reciprocal of the q representing the order of a finite field.

I mentioned in the previous post that the probability of the matrix being invertible is a decreasing function of n. This probability decreases to a positive limit, varying with the value of q. This limit is (1/q; 1/q)∞. Here the subscript ∞ denotes that we take the limit of (1/q; 1/q)n as n goes to infinity. There’s no problem here because the infinite product converges.

Mathematica and plotting

The q-Pochhammer symbol (a; q)n is implemented in Mathematica as QPochhammer[a, q, n] and the special case (q; q)∞ is implemented as QPochhammer[q]. We can use the latter to make the following plot.

Plot[QPochhammer[q], {q, -1, 1}]

Recall that the q in our motivating application is the reciprocal of the q in the q-Pochhammer symbol. The following plot shows that for large fields the limiting probability that an n by n matrix is invertible is near 1, but for smaller fields the limiting probability is smaller. For q = 2, the probability is 0.288788.

Plot[QPochhammer[1/q], {q, 2, 100}, PlotRange -> All]
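
The q = 2 value quoted above can be confirmed directly:

N[QPochhammer[1/2]]   (* 0.288788 *)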

Related posts

The Clausen function

I ran across the Clausen function the other day, and when I saw a plot of the function my first thought was that it looks sorta like a sawtooth wave.

[Plot of the Clausen function Cl2]

I wondered whether it also sounds like a sawtooth wave, and indeed it does. More on that shortly.

The Clausen function can be defined in terms of its Fourier series:

\text{Cl}_2(x) = \sum_{k=1}^\infty \frac{\sin(kx)}{k^2}

The function commonly known as the Clausen function is one of a family of functions, hence the subscript 2. The Clausen functions for all non-negative integers n are defined by replacing 2 with n on both sides of the defining equation.

The Fourier coefficients decay quadratically, as do those of a triangle wave, as discussed here. This implies the function Cl2(x) cannot have a continuous derivative. In fact, the derivative of Cl2(x) is infinite at 0. This follows quickly from the integral representation of the function.

\text{Cl}_2(x)=-\int_0^x\log \left|2\sin\frac{t}{2} \right|\, dt

The fundamental theorem of calculus shows that the derivative

\text{Cl}'_2(x)=-\log \left|2\sin\frac{x}{2} \right|

blows up at 0.

Now suppose we create an audio clip of Cl2(2π · 440 t). This creates a sound with pitch A 440, but rather than a sine wave it has an unpleasant buzzing sound, much like a sawtooth wave.

clausen2.wav

The harshness of the sound is due to the slow decay of the Fourier coefficients; the Fourier coefficients of more pleasant musical sounds decay much faster than quadratically.
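
For the curious, here is a rough way to reproduce such a clip in Mathematica, using the identity Cl2(θ) = Im Li2(e^{iθ}). This is a sketch, not necessarily how the original clip was made:

Play[Im[PolyLog[2, Exp[2 Pi 440 I t]]], {t, 0, 2}]   (* two seconds of buzzy A 440 *)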

Related posts

Computing Γ(z) for complex z with tables

In the previous post I mentioned that the general strategy for computing a mathematical function using tables is to first reduce the function argument to be within the range of the tabulated values, and then to use interpolation to compute the function at values that are not directly tabulated.

The second step is always the same, applying Lagrange interpolation with enough points to get the accuracy you need. But the first step, range reduction, depends on the function being evaluated. And as the previous post concluded, evaluating more advanced functions generally requires more advanced range reduction.

Real argument

For the gamma function, the identity

\Gamma(x + 1) = x\, \Gamma(x)

can be used to reduce the problem of computing Γ(x) for any real x to the problem of computing Γ(x) for x in the interval [1, 2]. If x is large, the identity will have to be applied many times and so this would be a lot of work. However, the larger x is, the more accurate Stirling’s approximation becomes.

Complex argument

Computing Γ(x + iy) is more complex, pardon the pun. We can still use the identity above to reduce the real part x of the argument to lie in the interval [1, 2], but what about the imaginary part y?

Abramowitz and Stegun, Table 6.7, tabulates the principal branch of log Γ(x + iy) for x from 1 to 2 and for y from 0 to 10, both in increments of 0.1. Generally the logarithm of the gamma function is more useful in computation than the gamma function itself. It is also easier to interpolate, which I imagine is the main reason A&S tabulate it rather than the gamma function per se. A note below Table 6.7 says that linear interpolation gives about 3 significant figures, and eight-point interpolation gives about 8 figures.
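
Here is a sketch of the same interpolation idea in Mathematica, generating a few table entries with LogGamma rather than looking them up in A&S; the grid and the target point are arbitrary choices:

tab = Table[{y, LogGamma[1.5 + I y]}, {y, 2.1, 2.5, 0.1}];
InterpolatingPolynomial[tab, 2.34]   (* five-point interpolation in y *)
LogGamma[1.5 + 2.34 I]               (* direct evaluation, for comparison *)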

By the Schwarz reflection principle,

\Gamma(\bar{z}) = \overline{\Gamma(z)}

and with this we can extend our range on y to [−10, 10].
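
A quick numerical illustration at an arbitrarily chosen point:

{Gamma[1.5 - 2.5 I], Conjugate[Gamma[1.5 + 2.5 I]]}   (* the two values agree *)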

Large imaginary part

What about larger y? We have two options: the duplication formula and Stirling’s approximation.

The duplication formula

\Gamma(2z) = \frac{1}{\sqrt{2\pi}}2^{2z - \frac{1}{2}} \Gamma(z) \Gamma\left(z + \tfrac{1}{2} \right )

lets us compute Γ(2z) if we can compute Γ(z) and Γ(z + ½).
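
The duplication formula is easy to verify numerically at an arbitrary point:

z = 1.3 + 7 I;
{Gamma[2 z], 2^(2 z - 1/2) Gamma[z] Gamma[z + 1/2]/Sqrt[2 Pi]}   (* the two values agree *)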

Stirling’s approximation for Γ(z) is accurate when |z| is large, and |x + iy| is large when |y| is large.

More posts on using tables

Solutions to tan(x) = x

I read something recently that said in passing that the solutions to the equation tan x = x are the zeros of the Bessel function J3/2. That brought two questions to mind. First, where have I seen the equation tan x = x before? And second, why should its solutions be the zeros of a Bessel function?

The answer to the first question is that I wrote about the local maxima of the sinc function three years ago. That post shows that the derivative of the sinc function sin(x)/x is zero if and only if x is a fixed point of the tangent function.

As for why that should be connected to the zeros of a Bessel function, that one’s pretty easy. In general, Bessel functions cannot be expressed in terms of elementary functions. But the Bessel functions whose order is an integer plus ½ can.

For integer n,

J_{n+{\frac{1}{2}}}(x)= (-1)^n \left(\frac{2}{\pi}\right)^{\frac{1}{2}} x^{n+{\frac{1}{2}}} \left(\frac{1}{x}\frac{d}{dx}\right)^n\left(\frac{\sin x}{x}\right)

So when n = 1, we’ve got the derivative of sinc right there in the definition. Carrying out the differentiation shows that J3/2(x) is a nonzero multiple of sin(x)/x − cos(x), which vanishes exactly when tan x = x.
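
We can confirm numerically that the fixed points of tangent match the zeros of J3/2:

x /. FindRoot[Tan[x] == x, {x, 4.49}]   (* 4.49341 *)
N[BesselJZero[3/2, 1]]                  (* 4.49341 *)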

Hypergeometric function of a large negative argument

It’s occasionally necessary to evaluate a hypergeometric function at a large negative argument. I was working on a project today that involved evaluating F(a, b; c; z) where z is a large negative number.

The hypergeometric function F(a, b; c; z) is defined by a power series in z whose coefficients are functions of a, b, and c. However, this power series has radius of convergence 1. This means you can’t use the series to evaluate F(a, b; c; z) for z < −1.

It’s important to keep in mind the difference between a function and its power series representation. The former may exist where the latter does not. A simple example is the function f(z) = 1/(1 − z). The power series for this function has radius of convergence 1, but the function is defined everywhere except at z = 1.

Although the series defining F(a, b; c; z) is confined to the unit disk, the function itself is not. It can be extended analytically beyond the unit disk, usually with a branch cut along the real axis for z ≥ 1.

It’s good to know that our function can be evaluated for large negative z, but how do we evaluate it?

Linear transformation formulas

Hypergeometric functions satisfy a huge number of identities, the simplest of which are known as the linear transformation formulas even though they are not linear transformations of z. They involve bilinear transformations of z, a.k.a. fractional linear transformations, a.k.a. Möbius transformations. [1]

One such transformation is the following, found in A&S 15.3.4 [2].

F(a, b; c; z) = (1-z)^{-a} F\left(a, c-b; c; \frac{z}{z-1} \right)

If z < 0, then 0 < z/(z − 1) < 1, which is inside the radius of convergence. However, as z goes off to −∞, z/(z − 1) approaches 1, and the convergence of the power series will be slow.

A more complicated, but more efficient, formula is A&S 15.3.7, a linear transformation formula that relates F at z to two other hypergeometric functions evaluated at 1/z. Now when z is large, 1/z is small, and these series will converge quickly.

\begin{align*} F(a, b; c; z) &= \frac{\Gamma(c) \Gamma(b-a)}{\Gamma(b) \Gamma(c-a)} \,(-z)^{-a} F\left(a, 1-c+a; 1-b+a; \frac{1}{z}\right) \\ &+ \frac{\Gamma(c) \Gamma(a-b)}{\Gamma(a) \Gamma(c-b)} \,(-z)^{-b} F\left(b, 1-c+b; 1-a+b; \frac{1}{z}\right) \end{align*}
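
Here is a numerical check of this formula in Mathematica at an arbitrarily chosen point, with parameters picked so that b − a is not an integer:

a = 1/3; b = 1/2; c = 5/4; z = -1000.;
Hypergeometric2F1[a, b, c, z]
Gamma[c] Gamma[b - a]/(Gamma[b] Gamma[c - a]) (-z)^-a Hypergeometric2F1[a, 1 - c + a, 1 - b + a, 1/z] +
 Gamma[c] Gamma[a - b]/(Gamma[a] Gamma[c - b]) (-z)^-b Hypergeometric2F1[b, 1 - c + b, 1 - a + b, 1/z]
(* both expressions return the same value *)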

Related posts

[1] It turns out these transformations are linear, but not as functions of a complex argument. They’re linear as transformations on a projective space. More on that here.

[2] A&S refers to the venerable Handbook of Mathematical Functions by Abramowitz and Stegun.

Bessel zero spacing

Bessel functions are to polar coordinates what sines and cosines are to rectangular coordinates. This is why Bessel functions often arise in applications with radial symmetry.

The locations of the zeros of Bessel functions are important in applications, and so you can find software for computing these zeros in mathematical libraries. In days gone by you could find them in printed tables, such as Table 9.5 in A&S.

Bessel functions are solutions to Bessel’s differential equation,

x^2 y'' + x y' + (x^2 - \nu^2) y = 0

For each ν the functions Jν and Yν, known as the Bessel functions of the first and second kind respectively, form a basis for the solutions to Bessel’s equation. These functions are analogous to cosine and sine.

As x → ∞, Bessel functions asymptotically behave like damped sinusoidal waves. Specifically,

\begin{align*} J_\nu(x) &\sim \sqrt{\frac{2}{\pi x}} \cos(x - \pi\nu/2 - \pi/4) \\ Y_\nu(x) &\sim \sqrt{\frac{2}{\pi x}} \sin(x - \pi\nu/2 - \pi/4) \end{align*}

Since for large x Bessel functions of order ν behave something like sin(x), you’d expect the spacing between consecutive zeros to approach π, and this is indeed the case.

We can say more. If ν² > ¼ then the spacing between zeros decreases toward π, and if ν² < ¼ the spacing between zeros increases toward π. This is not just true of Jν and Yν but also of their linear combinations, i.e. of any solution of Bessel’s equation with parameter ν.

If you look carefully, you can see this in the plots of J0 and J1 below. The solid blue curve, the plot of J0, crosses the x-axis at points closer together than π, and the dashed orange curve, the plot of J1, crosses the x-axis at points further apart than π.
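
A plot along these lines can be generated with:

Plot[{BesselJ[0, x], BesselJ[1, x]}, {x, 0, 20}, PlotStyle -> {Blue, {Orange, Dashed}}]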

For more on the spacing of Bessel zeros see [1].

Related posts

[1] F. T. Metcalf and Milos Zlamal. On the Zeros of Solutions to Bessel’s Equation. The American Mathematical Monthly, Vol. 73, No. 7, pp. 746–749

Addition theorems for Dixon functions

The last couple of blog posts have been about Dixon elliptic functions, functions that are analogous in some ways to the sine and cosine functions. Whereas sine and cosine satisfy a Pythagorean identity

\sin^2(z) + \cos^2(z) = 1

the Dixon functions sm and cm satisfy what you might call a Fermat identity

\text{sm}^3(z) + \text{cm}^3(z) = 1

alluding to Fermat’s last theorem.

The functions sm and cm also satisfy addition identities, but these look very different from the addition identities for sine and cosine.


\begin{align*}
\text{sm}(x + y) &= \frac{\text{sm}^2(x)\,\text{cm}(y) - \text{cm}(x)\,\text{sm}^2(y)}{\text{sm}(x)\,\text{cm}^2(y) - \text{cm}^2(x)\,\text{sm}(y)} \\[6pt]
\text{cm}(x + y) &= \frac{\text{sm}(x)\,\text{cm}(x) - \text{sm}(y)\,\text{cm}(y)}{\text{sm}(x)\,\text{cm}^2(y) - \text{cm}^2(x)\,\text{sm}(y)}
\end{align*}

Once you’ve seen the binomial theorem and the addition identities for trig functions, you might come away with the impression that it is common to be able to simply relate the value of a function at x + y to its values at x and at y. It is not.

There are only three classes of functions that satisfy addition theorems. (See this post for a precise definition of what is meant by an addition theorem.) And once you’ve seen the binomial theorem and the sum angle identities, you’ve seen representatives of two of the three classes. The three classes of functions of z with addition theorems are

  1. Rational functions of z
  2. Rational functions of exp(λz)
  3. Elliptic functions of z

The binomial theorem is an example of the first category, and the sum angle identities are examples of the second category (with λ = i). Dixon functions are examples of the third category.