Is there a way to make sense of the nth derivative of a function when n is not a positive integer?
The notation f^(n) is usually introduced in calculus classes in order to make Taylor’s theorem easier to state:

f(x) = Σ_{n=0}^∞ (f^(n)(a) / n!) (x − a)^n
To make the above statement work, the 0th derivative is defined to be the function itself, i.e. take no derivatives at all. This is a modest extension of the notation f^(n), from requiring n to be a positive integer to allowing n to be any non-negative integer. Can we make sense of the case when n is negative or non-integer? Before answering that question, let’s think about what a fractional derivative might be in some special cases if we could define it.
When we take derivatives of powers of x, we get factorial-like coefficients. The first derivative of x^m is m x^(m−1), the second derivative is m(m−1) x^(m−2), the third derivative is m(m−1)(m−2) x^(m−3), etc. We can use the gamma function to extend the factorial function to non-integer arguments, so maybe we could do the same to compute non-integer derivatives. If m > −1 and n is a positive integer, the nth derivative of x^m is (m!/(m−n)!) x^(m−n). We could rewrite this as (Γ(m+1) / Γ(m−n+1)) x^(m−n). The result holds for integer values of n, and so we could hope it holds for non-integer values of n.
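To make the formula concrete, here is a minimal Python sketch (the function name and choices of test values are mine) of the gamma-function version of the power rule:

```python
from math import gamma

def frac_deriv_power(m, n, x):
    """Order-n derivative of x**m via Gamma(m+1)/Gamma(m-n+1) * x**(m-n).
    Valid for m > -1 and x > 0; n need not be an integer."""
    return gamma(m + 1) / gamma(m - n + 1) * x**(m - n)

# Integer order agrees with ordinary calculus: d^2/dx^2 of x^3 is 6x.
frac_deriv_power(3, 2, 2.0)    # 12.0
# Half-derivative of x: Gamma(2)/Gamma(3/2) * x**(1/2) = 2*sqrt(x/pi).
frac_deriv_power(1, 0.5, 1.0)  # 2/sqrt(pi), about 1.1284
```

A pleasant sanity check: taking the half-derivative of x gives (2/√π) x^(1/2), and taking the half-derivative of x^(1/2) multiplies by Γ(3/2)/Γ(1) = √π/2, so two half-derivatives of x give 1, the first derivative.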
If n is an integer and we take the nth derivative of e^(bx) we get b^n e^(bx). We might guess that for non-integer values of n the same formula holds.
It is indeed possible to define derivatives of order n for non-integer values of n, and the speculations above are correct, subject to some conditions. In fact there are several ways to define non-integer derivatives and the differences can be complicated.
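One concrete construction, the Grünwald–Letnikov derivative, replaces the limit of difference quotients defining f′(x) with a fractional-order analogue using generalized binomial coefficients. A rough numerical sketch (the function name, step size, and truncation are my own choices) supports the guess above for exp(bx) with b > 0:

```python
from math import exp

def gl_fractional_derivative(f, alpha, x, h=1e-3, terms=20_000):
    """Grunwald-Letnikov approximation to the order-alpha derivative:
    h**-alpha * sum_k (-1)**k * binomial(alpha, k) * f(x - k*h),
    with the sum truncated after a fixed number of terms."""
    w = 1.0          # running weight (-1)**k * binomial(alpha, k)
    total = f(x)
    for k in range(1, terms):
        w *= (k - 1 - alpha) / k   # recurrence for the weights
        total += w * f(x - k * h)
    return total / h**alpha

b = 2.0
# The guess says the half-derivative of exp(b x) is sqrt(b)*exp(b x).
approx = gl_fractional_derivative(lambda t: exp(b * t), 0.5, 1.0)
# approx is close to sqrt(2)*exp(2), about 10.45
```

For integer alpha the weights terminate and the sum reduces to the familiar finite-difference formula, which is some reassurance that this is a sensible generalization.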
What about negative derivatives? Well, it makes sense that these could be anti-derivatives, i.e. integrals. We could define, for example, the -3rd derivative of f(x) to be a function whose third derivative is f(x). However, anti-derivatives are only determined up to a constant. We could use the Fundamental Theorem of Calculus to uniquely specify anti-derivatives if we agree on a lower limit of integration c, such as c = 0 or maybe c = -∞.
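Cauchy’s formula for repeated integration collapses an n-fold integral with lower limit c into a single integral, and the formula still makes sense when n is not an integer; with a fixed lower limit this is the Riemann–Liouville fractional integral. A sketch with c = 0 and a crude midpoint rule (the numerical details are my own):

```python
from math import gamma

def riemann_liouville_integral(f, n, x, steps=10_000):
    """Order-n antiderivative with lower limit 0 via Cauchy's formula:
    I^n f(x) = (1/Gamma(n)) * integral_0^x (x - t)**(n-1) f(t) dt.
    n may be any positive real; integration by the midpoint rule."""
    h = x / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h          # midpoints avoid the endpoint t = x
        total += (x - t)**(n - 1) * f(t)
    return total * h / gamma(n)

# Third antiderivative of f(x) = 1, lower limit 0, is x**3/6.
riemann_liouville_integral(lambda t: 1.0, 3, 2.0)    # about 8/6
# Half-integral of 1 is x**(1/2)/Gamma(3/2) = 2*sqrt(x/pi).
riemann_liouville_integral(lambda t: 1.0, 0.5, 2.0)  # about 1.596
```

Note how the choice c = 0 enters through the lower limit of integration, exactly the arbitrary constant issue mentioned above.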
Here’s one way fractional derivatives could be defined. Suppose the Fourier transform of f(x) is g(ξ). Then for positive integer n, the nth derivative of f(x) has Fourier transform (2π i ξ)^n g(ξ). So you could take the nth derivative of f(x) as follows: take the Fourier transform, multiply by (2π i ξ)^n, and take the inverse Fourier transform. This suggests the same procedure could be used to define the nth derivative when n is not an integer.
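For periodic functions this recipe can be carried out with the FFT. The sketch below (names are mine; this multiplier construction is sometimes called the Weyl fractional derivative, and it generally differs from the Riemann–Liouville definition) computes a half-derivative of sin(x); under this definition the order-α derivative of sin(x) works out to sin(x + απ/2):

```python
import numpy as np

def fourier_frac_deriv(samples, alpha, period):
    """Order-alpha derivative via the multiplier (2 pi i xi)**alpha,
    applied to equally spaced samples of one period of a function."""
    n = len(samples)
    xi = np.fft.fftfreq(n, d=period / n)   # frequencies in cycles per unit
    mult = (2j * np.pi * xi) ** alpha      # principal branch; 0**alpha = 0
    return np.fft.ifft(mult * np.fft.fft(samples)).real

n = 256
x = 2 * np.pi * np.arange(n) / n
half = fourier_frac_deriv(np.sin(x), 0.5, period=2 * np.pi)
# half matches sin(x + pi/4); applying the routine twice gives cos(x).
```

Applying the half-derivative twice recovering the first derivative is a good numerical check that the multiplier and branch choices are consistent.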
Fractional derivatives have practical uses. The book An Atlas of Functions makes frequent use of fractional derivatives, especially derivatives of order 1/2 and –1/2, to show connections between different classes of functions.
Related article: Generalizing binomial coefficients
6 thoughts on “Fractional derivatives”
Nice post! Things like this are fun, and remind me of trying to overload functions and operators in programming languages to get generics.
I suppose that you could even extend that idea to a “complex derivative.” I suppose that the Taylor expansion would get kind of tricky. :-)
I see no immediate reason that you couldn’t do the same with the FT formulation for L_2 functions too.
Is there any chance you could pick a reasonably common function like sin(x) and give us some example fractional derivatives?
Great article John. You know you can make the hyphens – look nicer by typing &amp;minus;? −
Phil: The formulas suggested above for differentiating powers of x and exp(b x) do generalize to fractional derivatives. For powers, the exponent on x must be bigger than -1 and x must be > 0.
The formula for derivatives of exp(b x) suggested above also extends to fractional derivatives, but this time we require b > 0.
For sine and cosine, things are more complicated. The half derivatives of sine and cosine involve Fresnel integrals. The examples of powers of x and exp( b x) are atypical in that they have a simple form. In general, fractional derivatives involve hypergeometric functions if they can be computed at all.
Interestingly, a few years ago (more like 30), the concept of integrals of negative dimensions was introduced (by I G Halliday at Imperial I think). As an example, we know what a triple integral is. Now consider the integral over -3 dimensions. It turns out that these integrals are best defined as derivatives, rather like integrals over fermionic variables in supersymmetry theories.