Bessel zero spacing

Bessel functions are to polar coordinates what sines and cosines are to rectangular coordinates. This is why Bessel functions often arise in applications with radial symmetry.

The locations of the zeros of Bessel functions are important in applications, and so you can find software for computing these zeros in mathematical libraries. In days gone by you could find them in printed tables, such as Table 9.5 in Abramowitz and Stegun (A&S).

Bessel functions are solutions to Bessel’s differential equation,

x^2 y'' + x y' + (x^2 - \nu^2) y = 0

For each ν the functions Jν and Yν, known as the Bessel functions of the first and second kind respectively, form a basis for the solutions to Bessel’s equation. These functions are analogous to cosine and sine.

As x → ∞, Bessel functions asymptotically behave like damped sinusoidal waves. Specifically,

\begin{align*} J_\nu(x) &\sim \sqrt{\frac{2}{\pi x}} \cos(x - \pi\nu/2 - \pi/4) \\ Y_\nu(x) &\sim \sqrt{\frac{2}{\pi x}} \sin(x - \pi\nu/2 - \pi/4) \end{align*}
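As a quick sanity check, here’s a comparison of J0 with its asymptotic approximation at a moderately large argument. This is just a sketch using SciPy’s Bessel function jv.

import numpy as np
from scipy.special import jv

x = 50.0
print(jv(0, x))                                    # J_0(50)
print(np.sqrt(2/(np.pi*x)) * np.cos(x - np.pi/4))  # asymptotic approximation

The two values agree closely, and the agreement improves as x grows.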

So since for large x Bessel functions of order ν behave something like sin(x), you’d expect the spacing between their zeros to approach π, and this is indeed the case.

We can say more. If ν² > ¼ then the spacing between zeros decreases toward π, and if ν² < ¼ the spacing between zeros increases toward π. This is not just true of Jν and Yν but also of their linear combinations, i.e. of any solution of Bessel’s equation with parameter ν.

If you look carefully, you can see this in the plots of J0 and J1 below. The solid blue curve, the plot of J0, crosses the x-axis at points closer together than π, and the dashed orange curve, the plot of J1, crosses the x-axis at points further apart than π.
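You can also check the spacing claims numerically. Here’s a quick look using SciPy’s jn_zeros, which returns the first several positive zeros of Jn.

import numpy as np
from scipy.special import jn_zeros

print(np.diff(jn_zeros(0, 6)))  # spacings less than pi, increasing toward pi
print(np.diff(jn_zeros(1, 6)))  # spacings greater than pi, decreasing toward pi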

For more on the spacing of Bessel zeros see [1].


[1] F. T. Metcalf and Milos Zlamal. On the Zeros of Solutions to Bessel’s Equation. The American Mathematical Monthly, Vol. 73, No. 7, pp. 746–749

Addition theorems for Dixon functions

The last couple of blog posts have been about Dixon elliptic functions, which are analogous in some ways to sine and cosine. Whereas sine and cosine satisfy a Pythagorean identity

\sin^2(z) + \cos^2(z) = 1

the Dixon functions sm and cm satisfy what you might call a Fermat identity

\text{sm}^3(z) + \text{cm}^3(z) = 1

alluding to Fermat’s last theorem.

The functions sm and cm also satisfy addition identities, but these look very different than the addition identities for sine and cosine.


\begin{align*}
\text{sm}(x + y) &= \frac{\text{sm}^2(x)\,\text{cm}(y) - \text{cm}(x)\,\text{sm}^2(y)}{\text{sm}(x)\,\text{cm}^2(y) - \text{cm}^2(x)\,\text{sm}(y)} \\
\text{cm}(x + y) &= \frac{\text{sm}(x)\,\text{cm}(x) - \text{sm}(y)\,\text{cm}(y)}{\text{sm}(x)\,\text{cm}^2(y) - \text{cm}^2(x)\,\text{sm}(y)}
\end{align*}

Once you’ve seen the binomial theorem and the addition identities for trig functions, you might come away with the impression that it is common to be able to simply relate the value of a function at x + y to its values at x and at y. It is not.

There are only three classes of functions that satisfy addition theorems. (See this post for a precise definition of what is meant by an addition theorem.) And once you’ve seen the binomial theorem and the sum angle identities, you’ve seen representatives of two of the three classes. The three classes of functions with addition theorems for functions of z are

  1. Rational functions of z
  2. Rational functions of exp(λz)
  3. Elliptic functions of z

The binomial theorem is an example of the first category and sum angle identities are examples of the second category (with λ = i). Dixon functions are examples of the third category.

Conformal map between disk and equilateral triangle

The Dixon elliptic functions sm and cm are in some ways analogous to sine and cosine. However, whereas sine and cosine satisfy

\sin^2(z) + \cos^2(z) = 1

the Dixon functions satisfy

\text{sm}^3(z) + \text{cm}^3(z) = 1

The exponent 3 foreshadows the fact that these functions have a sort of three-fold symmetry. In particular, the function sm maps an equilateral triangle in the complex plane to the unit circle. The function sm gives a conformal map from the interior of this triangle to the interior of the unit disk.

In this post we will work with sm−1 rather than sm, mapping the unit circle to an equilateral triangle. An advantage of working with the inverse function is that we can start with the unit circle and see what triangle it maps to; if we started with the triangle it might seem arbitrary. Also, the function sm is not commonly part of mathematical software libraries—it’s not in Mathematica or SciPy—but you can compute its inverse via

\text{sm}^{-1}(z) = {}_2F_1(\tfrac{1}{3}, \tfrac{2}{3}; \tfrac{4}{3}; z^3) \, z

using the hypergeometric function 2F1, which is a common part of mathematical libraries.
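For example, here’s a minimal sketch using mpmath, whose hyp2f1 accepts complex arguments. This is just a direct transcription of the formula above, not an optimized implementation.

from mpmath import hyp2f1, mpc

def sm_inv(z):
    # sm^{-1}(z) = z 2F1(1/3, 2/3; 4/3; z^3), valid for |z| <= 1
    return mpc(z) * hyp2f1(1/3, 2/3, 4/3, mpc(z)**3)

print(sm_inv(1))  # right-most vertex of the image triangle, about 1.7666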

The following image shows concentric circles in the z plane and their image under sm−1 in the w plane, w = sm−1(z).

Conformal map of unit disk to equilateral triangle using the inverse of the Dixon elliptic function sm

If we were to use this in applications, we’d need to know the vertices of the image triangle so we could do a change of variables to transform this triangle into a particular triangle we’re interested in.

The centroid of the image is at the origin, and the right-most vertex is at approximately 1.7666. To be exact, the vertex is at

v = ⅓ B(⅓, ⅓)

where B is the beta function. (Notice all the 3’s in the formula for v.) The other two vertices are at exp(2πi/3) v and exp(4πi/3) v.

One way this conformal map could arise in practice is solving Laplace’s equation on a triangle. You can solve Laplace’s equation on a disk in closed form, and transform that solution into a solution on the triangle.


Python code for means

The last couple of articles have looked at various kinds of means. The Python code for four of these means is trivial:

gm  = lambda a, b: (a*b)**0.5             # geometric mean
am  = lambda a, b: (a + b)/2              # arithmetic mean
hm  = lambda a, b: 2*a*b/(a + b)          # harmonic mean
chm = lambda a, b: (a**2 + b**2)/(a + b)  # contraharmonic mean

But the arithmetic-geometric mean (AGM) is not trivial:

from numpy import pi
from scipy.special import ellipk

# Note: SciPy's ellipk takes the parameter m = k**2, not the modulus k.
agm = lambda a, b: 0.25*pi*(a + b)/ellipk((a - b)**2/(a + b)**2)

The arithmetic-geometric mean is defined by iterating the arithmetic and geometric means and taking the limit. This iteration converges very quickly, and so writing code that directly implements the definition is efficient.
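Here’s what a direct implementation of the definition might look like, as a rough sketch for positive arguments:

def agm_iter(a, b, tol=1e-15):
    # iterate the arithmetic and geometric means until they agree
    while abs(a - b) > tol*max(a, b):
        a, b = (a + b)/2, (a*b)**0.5
    return (a + b)/2

print(agm_iter(1, 2))  # compare with agm(1, 2) above

Only a handful of iterations are needed because the convergence is quadratic: the number of correct digits roughly doubles with each pass through the loop.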

But the AGM can also be computed via a special function K, the “complete elliptic integral of the first kind,” which makes the code above more compact. This is conceptually nice because we can think of the AGM as a simple function, not an iterative process.

But how is K evaluated? In some sense it doesn’t matter: it’s encapsulated in the SciPy library. But someone has to write SciPy. I haven’t looked at the SciPy source code, but usually K is calculated numerically using the AGM because, as we said above, the AGM converges very quickly.

Bell curve meme: How to calculate the AGM? The left and right tails say to use a while loop. The middle says to evaluate a complete elliptic integral of the first kind.

This fits the pattern of a bell curve meme: the novice and expert approaches are the same, but for different reasons. The novice uses an iterative approach because that directly implements the definition. The expert knows about the elliptic integral, but also knows that the iterative approach suggested by the definition is remarkably efficient and eliminates the need to import a library.

Although it’s easy to implement the AGM with a while loop, the code above does have some advantages. For one thing, it pushes the responsibility for validation and exception handling onto the library. On the other hand, the code is easy to get wrong because there are two conventions on how to parameterize K and you have to be sure to use the same one your library uses.

Addition theorems

Earlier this week I wrote about several ways to generalize trig functions. Since trig functions have addition theorems like

\begin{align*} \sin(\theta \pm \varphi) &= \sin\theta \cos\varphi \pm \cos\theta \sin\varphi \\ \cos(\theta \pm \varphi) &= \cos\theta \cos\varphi \mp \sin\theta \sin\varphi \\ \tan(\theta \pm \varphi) &= \frac{\tan\theta \pm \tan\varphi}{1 \mp \tan\theta \tan\varphi} \end{align*}

a natural question is whether generalized trig functions also have addition theorems.

Hyperbolic functions have well-known addition theorems analogous to the addition theorems above. This isn’t too surprising since circular and hyperbolic functions are fundamentally two sides of the same coin.

I mentioned that the lemniscate functions satisfy many identities but didn’t give any examples. Here are addition theorems satisfied by the lemniscate sine sl and the lemniscate cosine cl.

\begin{aligned} \text{cl}\,(x+y) &= \frac{\text{cl}\,x\, \text{cl}\,y - \text{sl}\,x\, \text{sl}\,y} {1 + \text{sl}\,x\, \text{cl}\,x\, \text{sl}\,y\, \text{cl}\,y} \\ \text{sl}\,(x+y) &= \frac{\text{sl}\,x\, \text{cl}\,y + \text{cl}\,x\, \text{sl}\,y} {1 - \text{sl}\,x\, \text{cl}\,x\, \text{sl}\,y\, \text{cl}\,y} \end{aligned}
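Here’s a numerical spot check of the first identity. It’s only a sketch: sl and cl are computed by numerically inverting the arc length integrals that define them (the integrals of (1 − t⁴)^(−1/2) from 0 to x and from x to 1, respectively), so it is slow and only valid for small positive arguments.

from scipy.integrate import quad
from scipy.optimize import brentq

integrand = lambda t: (1 - t**4)**(-0.5)

def sl(x):
    # invert arcsl(u) = integral from 0 to u of dt/sqrt(1 - t^4)
    return brentq(lambda u: quad(integrand, 0, u)[0] - x, 0, 1)

def cl(x):
    # invert arccl(u) = integral from u to 1 of dt/sqrt(1 - t^4)
    return brentq(lambda u: quad(integrand, u, 1)[0] - x, 0, 1)

x, y = 0.3, 0.4
lhs = cl(x + y)
rhs = (cl(x)*cl(y) - sl(x)*sl(y)) / (1 + sl(x)*cl(x)*sl(y)*cl(y))
print(lhs, rhs)  # should agree to several decimal places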

Addition theorems for sinp and friends are harder to come by. In [1] the authors say “no addition formula for sinp is known to us” but they did come up with a double-argument theorem for a special case of sinp,q:

\sin_{4/3, 4}(2x) = \frac{2 \sin_{4/3, 4}(x)\, (\cos_{4/3, 4}(x))^{1/3}}{\left( 1 + 4(\sin_{4/3, 4}(x))^4 \,(\cos_{4/3, 4}(x))^{4/3} \right)^{1/2}}

There is a deep reason why the lemniscate and hyperbolic functions have addition theorems and sinp does not, namely a theorem of Weierstrass. This theorem says that a meromorphic function has an algebraic addition theorem if and only if it is an elliptic function of z, a rational function of z, or a rational function of exp(λz).

The lemniscate functions have addition theorems because they are elliptic functions. Circular and hyperbolic functions have addition theorems because they are rational functions of exp(iz). But sinp does not have an addition theorem because it is not elliptic, rational, or a rational function of exp(λz). It’s possible that sinp has some sort of addition theorem that falls outside of Weierstrass’ theorem, i.e. an addition theorem using a non-algebraic function.

You may have noticed that the addition rule for sine involves not only sine but also cosine. But using the Pythagorean identity we can turn an addition rule involving sines and cosines into one only involving sines. Similarly, we can use a Pythagorean-like theorem to turn the identities involving sl and cl into identities involving only one of these functions.

Elliptic functions satisfy addition theorems, and conversely, functions satisfying algebraic addition theorems are elliptic (or fall into one of the other two cases of Weierstrass’ theorem). Rational functions of z and rational functions of exp(λz) are easy to spot, so if you see an unfamiliar function that has an algebraic addition theorem, you know it’s an elliptic function. If you saw the addition theorems for sl and cl before knowing what these functions are, you could say to yourself that these are probably elliptic functions.

You may see other theorems called addition theorems. For example, the gamma function satisfies an addition theorem, although it is not elliptic or rational. But this is a restricted kind of addition theorem: it applies to x + 1 and not to general x + y. Also, the Bessel functions have addition theorems, but these theorems involve infinite sums; they are not algebraic addition theorems.

[1] David E. Edmunds, Petr Gurka, Jan Lang. Properties of generalized trigonometric functions. Journal of Approximation Theory 164 (2012) 47–56.

p-norm trig functions and “squigonometry”

This is the fourth post in a series on generalizations of sine and cosine.

The first post looked at defining sine as the inverse of the inverse sine. The reason for this unusual approach is that the inverse sine is given in terms of an arc length and an integral. We can generalize sine by generalizing this arc length and/or generalizing the integral.

The first post mentioned that you could generalize the inverse sine by replacing “2” with “p” in an integral. Specifically, the function

F_p(x) = \int_0^x (1 - |t|^p)^{-1/p} \,dt

is the inverse sine when p = 2 and in general is the inverse of the function sinp. Unfortunately, there are two different ways to define sinp. We next present a generalization that includes both definitions as special cases.

Edmunds, Gurka, and Lang [1] define the function

F_{p,q}(x) = \int_0^x (1 - t^q)^{-1/p} \,dt

and define sinp,q to be its inverse.
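A direct, if inefficient, way to experiment with sinp,q is to compute Fp,q by quadrature and invert it by root finding. This is only a sketch of the definition above.

from math import sin
from scipy.integrate import quad
from scipy.optimize import brentq

def F(p, q, x):
    # F_{p,q}(x) = integral from 0 to x of (1 - t^q)^(-1/p) dt
    return quad(lambda t: (1 - t**q)**(-1/p), 0, x)[0]

def sin_pq(p, q, theta):
    # inverse of F_{p,q} on [0, F_{p,q}(1)], found by root finding
    return brentq(lambda x: F(p, q, x) - theta, 0, 1)

print(sin_pq(2, 2, 1.0), sin(1.0))  # p = q = 2 recovers the ordinary sine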

The definition of sinp at the top of the post corresponds to sinp,q with p = q in the definition of Edmunds et al.

The other definition, and the one we’ll use for the rest of the post, corresponds to sinr,s where s = p and r = p/(p − 1), the conjugate exponent of p.

This second definition of sinp has a geometric interpretation analogous to that in the previous post for hyperbolic functions [2]. That is, we start at (1, 0) and move clockwise along the p-norm circle until we sweep out an area of α/2. When we have swept out that much area, we are at the point (cosp α, sinp α).

When p = 4, the p-norm circle is also known as a “squircle,” and the p-norm sine and cosine analogs are sometimes placed under the heading “squigonometry.”


[1] David E. Edmunds, Petr Gurka, Jan Lang. Properties of generalized trigonometric functions. Journal of Approximation Theory 164 (2012) 47–56.

[2] Chebolu et al. Trigonometric functions in the p-norm https://arxiv.org/abs/2109.14036

Lemniscate functions

In the previous post I said that you could define the inverse sine as the function that gives the arc length along a circle, then define sine to be the inverse of the inverse sine. The purpose of such a backward definition is that it generalizes to other curves besides the circle. For example, it generalizes to the lemniscate, a curve studied by Bernoulli.

The lemniscate in rectangular coordinates satisfies

(x^2 + y^2)^2 = x^2 - y^2

and in polar coordinates

r^2 = \cos 2\theta

The function arcsl(x), analogous to arcsin(x), is defined as the length of the arc along the lemniscate from the origin to the point (x, y). The length of the arc from (x, y) to the x-axis is arccl(x).

\begin{align*} \mbox{arcsl}(x) &= \int_0^x \frac{dt}{\sqrt{1 - t^4}} \\ \mbox{arccl}(x) &= \int_x^1 \frac{dt}{\sqrt{1 - t^4}} \\ \end{align*}

The lemniscate sine, sl, is the inverse of arcsl, and the lemniscate cosine, cl, is the inverse of arccl. These functions were first studied by Giulio Fagnano three centuries ago.

The lemniscate functions sl and cl are elliptic functions, and so they have a lot of nice properties and satisfy a lot of identities. See Wikipedia, for example. Update: see this follow up post on addition theorems.

Lemniscate constant

As mentioned in the previous post, generalizations of the sine and cosine functions have corresponding generalizations of π.

Just as the period of sine and cosine is 2π, the period of the lemniscate sine and lemniscate cosine is 2ϖ.

The number ϖ is called the lemniscate constant. It is written with the Unicode character U+03D6, GREEK SMALL LETTER OMEGA PI. The LaTeX command is \upvarpi.

The lemniscate constant ϖ is related to Gauss’ constant G by ϖ = πG.

The area of a squircle is √2 ϖ.

There is also a connection to the beta function: 2ϖ = B(1/4, 1/2).
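These relations are easy to check numerically. Here’s a small sketch comparing ϖ computed from its defining arc length integral with the beta function expression above.

from scipy.integrate import quad
from scipy.special import beta

# varpi = 2 * arcsl(1) = 2 * integral from 0 to 1 of dt/sqrt(1 - t^4)
varpi_integral = 2 * quad(lambda t: (1 - t**4)**(-0.5), 0, 1)[0]
varpi_beta = 0.5 * beta(0.25, 0.5)
print(varpi_integral, varpi_beta)  # both approximately 2.62206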

Generalized trigonometry

In a recent post I mentioned in passing that trigonometry can be generalized from functions associated with a circle to functions associated with other curves. This post will go into that a little further.

The equation of the unit circle is

x^2 + y^2 = 1

and so in the first quadrant

y = \sqrt{1 - x^2}

The length of an arc from (1, 0) to (cos θ, sin θ) is θ. If we write the arc length as an integral we have

\int_0^{\sin \theta} (1 -t^2)^{-1/2} \,dt = \theta

and so

F(x) = \int_0^x (1 - t^2)^{-1/2} \,dt

is the inverse sine of x. Sine is the inverse of the inverse of sine, so we could define the sine function to be the inverse of F.

This would be a complicated way to define the sine function, but it suggests ways to create variations on sine: take the length of an arc along a curve other than the circle, and call the inverse of this function a new kind of sine. Or tinker with the integral defining F, whether or not the resulting integral corresponds to the length along a familiar curve, and use that to define a generalized sine.

Example: sinp

We can replace the 2’s in the integral above with p’s, defining Fp as

F_p(x) = \int_0^x (1 - |t|^p)^{-1/p} \,dt

and defining sinp to be the inverse of Fp. When p = 2, sinp(x) = sin(x). This idea goes back to E. Lundberg in 1879.

The function sinp has its applications. For example, just as the sine function is an eigenfunction of the Laplacian, sinp is an eigenfunction of the p-Laplacian.

We can extend sinp to be a periodic function with period 4Fp(1). The constants πp are defined as 2Fp(1) so that sinp has period 2πp and π2 = π.
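Here’s a quick numerical check of πp, computing Fp(1) by quadrature directly from the definition. This is only a sketch.

from scipy.integrate import quad

def pi_p(p):
    # pi_p = 2 F_p(1) = 2 * integral from 0 to 1 of (1 - t^p)^(-1/p) dt
    return 2 * quad(lambda t: (1 - t**p)**(-1/p), 0, 1)[0]

print(pi_p(2))  # 3.14159..., i.e. pi_2 = pi
print(pi_p(4))  # approximately 2.2214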

Future posts

I intend to explore several generalizations of sine and cosine. What happens if you replace a circle with an ellipse or a hyperbola? Or a squircle? How do these variations on sine and cosine compare to the originals? Do they satisfy analogous identities? How do they appear in applications? I’d like to address some of these questions in future posts.

Belt around an elliptical waist

I just saw a tweet from Dave Richeson saying

I remember as a kid calculating the size difference (diameter) of a belt between each hole. Now I think about it every time I wear a belt.

Holes 1 inch apart change the diameter by about one-third of an inch (1/π). [Assuming people have a circular waistline 🤣]

People do not have circular waistlines, unless they are obese, but the circular approximation is fine for reasons we’ll show below.

Robust approximations

Good simplifications, such as approximating a human waist by a circle, are robust. It doesn’t matter how well a circle approximates a waistline but rather how well the conclusion assuming a circular waistline approximates the conclusion for a real waistline.

There’s a joke that physicists say things like “assume a spherical cow.” Obviously cows are not spherical, but depending on the context, assuming a spherical cow may be a very sensible thing to do.

Elliptical waistlines

A human waistline may be closer to an ellipse than a circle. It’s not an ellipse either—it varies from person to person—but my point here is to show that using a different model results in a similar conclusion.

For a circle, the perimeter equals π times the diameter. So an increase of 1 inch in the perimeter corresponds to an increase of 1/π in the diameter, as Dave said.

Suppose we increase the perimeter of an ellipse by 1 and keep the aspect ratio of the ellipse the same. How much do the major and minor axes change?

The answer will depend on the aspect ratio of the ellipse. I’m going to guess that the aspect ratio is maybe 2 to 1. This corresponds to eccentricity e equal to 0.87.

The ratio of the perimeter of an ellipse to its major axis is 2E(e), where E is the complete elliptic integral of the second kind and e is the eccentricity. (See, there’s a good reason Dave used a circle rather than an ellipse!)

For a circle, the eccentricity is 0, and E(0) = π/2, so the ratio of perimeter to the major axis (i.e. diameter) is π. For eccentricity 0.87 this ratio is 2.42. So a change in belt size of 1 inch would correspond to a change in major axis of 0.41 and a change in minor axis of 0.21.

Dave’s estimate of 1/3 of an inch is roughly the average of these two values. If you average the major and minor axes of an ellipse and call that the “diameter” then Dave’s circular model comes to about the same conclusion as our elliptical model, but avoids having to use elliptic integrals.

Perimeter to average axis ratio

The following graph shows the ratio of perimeter to average axis length for an ellipse. On the left end, aspect 1, we have a circle and the ratio is π. As the aspect ratio goes to infinity, the limiting value is 4.

Even for substantial departures from a circle, such as a 2 : 1 or 3 : 1 aspect ratio, the ratio isn’t far from π.
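The curve in the graph is easy to reproduce. Here’s a sketch using SciPy’s ellipe; note that ellipe takes the parameter m = e², the square of the eccentricity.

import numpy as np
from scipy.special import ellipe

def ratio(aspect):
    a, b = aspect, 1.0          # semi-major and semi-minor axes
    m = 1 - (b/a)**2            # parameter m = e^2
    perimeter = 4*a*ellipe(m)
    return perimeter / (a + b)  # average axis length is (2a + 2b)/2 = a + b

print(ratio(1))   # pi for a circle
print(ratio(2))   # about 3.23 for a 2:1 ellipse
print(ratio(50))  # approaching 4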


Area and volume of hypersphere cap

A spherical cap is the portion of a sphere above some horizontal plane. For example, the polar ice cap of the earth is the region above some latitude. I mentioned in this post that the area above a latitude φ is

A = 2\pi R^2(1-\sin\varphi)

where R is the earth’s radius. Latitude is the angle up from the equator. If we use the angle θ down from the pole, we get

A = 2\pi R^2(1-\cos\theta)

I recently ran across a generalization of this formula to higher-dimensional spheres in [1]. This paper uses the polar angle θ rather than latitude φ. Throughout this post we assume 0 ≤ θ ≤ π/2.

The paper also includes a formula for the volume of a hypersphere cap which I will include here.

Definitions

Let S be the surface of a ball of radius R in n-dimensional space and let An(R) be its surface area.

A_n(R) = \frac{2\pi^{n/2}}{\Gamma(n/2)} R^{n-1}

Let Ix(a, b) be the regularized incomplete beta function with parameters a and b evaluated at x. (This notation is arcane but standard.)

I_x(a, b) = \frac{1}{B(a, b)}\int_0^x t^{a-1}\, (1-t)^{b-1}\, dt

This is the CDF of a beta(a, b) random variable. The normalizing constant B(a, b) is the (complete) beta function.

B(a, b) = \int_0^1 t^{a-1}\, (1-t)^{b-1}\, dt

Area equation

Now we can state the equation for the area of a spherical cap of a hypersphere in n dimensions.

A_n^{\text{cap}}(R) = \frac{1}{2}A_n(R)\, I_{\sin^2\theta}\left(\frac{n-1}{2}, \frac{1}{2} \right )

Recall that we assume the polar angle θ satisfies 0 ≤ θ ≤ π/2.

It’s not obvious that this reduces to the equation at the top of the post when n = 3, but it does: for n = 3 we have A3(R) = 4πR² and Ix(1, ½) = 1 − √(1 − x), which equals 1 − cos θ when x = sin²θ.

Volume equation

The equation for the volume of the spherical cap is very similar:

V_n^{\text{cap}}(R) = \frac{1}{2}V_n(R)\, I_{\sin^2\theta}\left(\frac{n+1}{2}, \frac{1}{2} \right )

where Vn(R) is the volume of a ball of radius R in n dimensions.

V_n(R) = \frac{\pi^{n/2}}{\Gamma\left(\frac{n}{2} + 1\right)} R^n
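Here’s a sketch of these formulas in code, using SciPy’s betainc, which computes the regularized incomplete beta function Ix(a, b), and checking the n = 3 case against the formula at the top of the post.

import numpy as np
from scipy.special import betainc, gamma

def sphere_area(n, R):
    # surface area of a sphere of radius R in n dimensions
    return 2 * np.pi**(n/2) * R**(n-1) / gamma(n/2)

def ball_volume(n, R):
    # volume of a ball of radius R in n dimensions
    return np.pi**(n/2) * R**n / gamma(n/2 + 1)

def cap_area(n, R, theta):
    return 0.5 * sphere_area(n, R) * betainc((n-1)/2, 0.5, np.sin(theta)**2)

def cap_volume(n, R, theta):
    return 0.5 * ball_volume(n, R) * betainc((n+1)/2, 0.5, np.sin(theta)**2)

R, theta = 1.0, 0.7
print(cap_area(3, R, theta))
print(2*np.pi*R**2*(1 - np.cos(theta)))  # should match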


[1] Shengqiao Li. Concise Formulas for the Area and Volume of a Hyperspherical Cap. Asian Journal of Mathematics and Statistics 4 (1): 66–70, 2011.