Complex analysis is filled with theorems that seem too good to be true. One is that if a complex function is once differentiable, it’s infinitely differentiable. How can that be? Someone asked this on math.stackexchange and this was my answer.

The existence of a complex derivative means that locally a function can only rotate and expand. That is, in the limit, disks are mapped to disks. This rigidity is what makes a complex differentiable function infinitely differentiable, and even more, analytic.

For a complex derivative to exist, the limit of (f(z+h) – f(z))/h must exist and have the same value for every way the “h” term can go to zero. In particular, h could approach 0 along any radial path, and that’s why an analytic function must map disks to disks in the limit.
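As a small numerical illustration (my own sketch; the test function, point, and step size are my choices, not from the post), the difference quotient of an analytic function such as f(z) = z² gives essentially the same value no matter which direction h approaches zero from:

```python
import cmath

def difference_quotient(f, z, h):
    """Difference quotient (f(z+h) - f(z)) / h."""
    return (f(z + h) - f(z)) / h

f = lambda z: z * z   # analytic; its derivative is 2z
z0 = 1 + 2j

# Let h -> 0 along several radial directions e^{i*theta}.
for theta in (0, cmath.pi / 4, cmath.pi / 2, cmath.pi):
    h = 1e-6 * cmath.exp(1j * theta)
    print(theta, difference_quotient(f, z0, h))   # each is close to 2*z0 = 2 + 4j
```

For f(z) = z² the quotient is exactly 2z + h, so every direction agrees to within |h|.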

By contrast, an infinitely differentiable function of two real variables could map a disk to an ellipse, stretching more in one direction than another. An analytic function can’t do that.

A smooth function of two variables could also flip a disk over, such as f(x, y) = (x, -y). An analytic function can’t do that either. That’s why complex conjugation is not an analytic function.
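The same experiment shows numerically why conjugation fails (again my own sketch, with an arbitrarily chosen test point): the difference quotient of f(z) = z̄ depends on the direction of approach, so no single limit exists.

```python
def quotient(f, z, h):
    """Difference quotient (f(z+h) - f(z)) / h."""
    return (f(z + h) - f(z)) / h

conj = lambda z: z.conjugate()
z0 = 1 + 2j

along_real = quotient(conj, z0, 1e-6)    # h real:      quotient is  1
along_imag = quotient(conj, z0, 1e-6j)   # h imaginary: quotient is -1
print(along_real, along_imag)            # the limits disagree, so z-bar has no derivative
```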

You might think that if complex differentiability is so restrictive, there must not be many complex differentiable functions. And yet nearly every function you’ve run across in school — trig functions, logs, polynomials, classical probability distributions, etc. — is differentiable when extended to a function of a complex variable. By one estimate, “95 percent, if not 100 percent, of the functions studied by physics, engineering, and even mathematics students” are hypergeometric, a very special case of complex differentiable functions.

Contact me if you’d like help with complex analysis.

Thank you, John!

Have you seen my question here:

http://mathoverflow.net/questions/66377/why-is-differentiating-mechanics-and-integration-art

I would love to see an answer from you! You have the insight, intuition and pedagogical talent to give a great answer 🙂

vonjd: There are some awfully deep answers to your question. I’d like to go back and read them carefully.

My elementary answer is that there’s no product rule or chain rule for integrals. (Except in probability with independent random variables. That’s why independence makes this so easy: there’s a product rule!)
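For what it’s worth, here is a small numerical check of that product rule (my own illustration, using uniform random variables for concreteness): when X and Y are independent, the joint density factors, so E[XY] equals E[X]·E[Y].

```python
def double_expectation(f, n=400):
    """E[f(X, Y)] for X, Y independent uniform on [0,1], via the midpoint rule on a grid."""
    dx = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        for j in range(n):
            y = (j + 0.5) * dx
            total += f(x, y)      # the joint density is 1 on the unit square
    return total * dx * dx

e_xy = double_expectation(lambda x, y: x * y)
e_x = double_expectation(lambda x, y: x)
e_y = double_expectation(lambda x, y: y)
print(e_xy, e_x * e_y)   # independence: E[XY] = E[X] E[Y]
```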

“You might think that if complex differentiability is so restrictive, there must not be many complex differentiable functions.”

Fun fact: Differentiability for quaternion functions, f: H -> H, is even more restrictive. There are four “H-Cauchy-Riemann” equations, each requiring four derivatives to be equal to each other. You end up with affine functions (z –> a*z + b) being the only functions that are H-differentiable on any open set.

(And because of non-commutativity, you have to choose whether you want to talk about left-diff’ble or right-diff’ble. Which you choose leads you to either a*z+b or z*a+b being the diff’ble linear functions.)

I can see that the requirement of mapping disks to disks is restrictive, and that the requirement that a function be infinitely differentiable is restrictive as well. But why these two restrictions coincide is not obvious to me. I would explain it to myself via the Cauchy integral formula, but that’s not intuitive to me either.
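A numerical version of the Cauchy integral formula may help make the connection concrete (a sketch with my own choices of function, center, and contour): integrating f(z)/(z − z0) around a circle recovers f(z0), so the values of an analytic function inside a disk are completely determined by its values on the boundary circle — and differentiating under that integral sign as many times as you like is what gives infinite differentiability.

```python
import cmath

def cauchy_integral(f, z0, radius=1.0, n=1000):
    """Approximate (1/2*pi*i) * contour integral of f(z)/(z - z0) dz
    over a circle of the given radius about z0."""
    total = 0.0
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        z = z0 + radius * cmath.exp(1j * theta)
        dz = radius * 1j * cmath.exp(1j * theta) * (2 * cmath.pi / n)
        total += f(z) / (z - z0) * dz
    return total / (2j * cmath.pi)

z0 = 0.3 + 0.4j
print(cauchy_integral(cmath.exp, z0), cmath.exp(z0))   # the two agree closely
```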

Another way to define a holomorphic function, f(z), is this: f is holomorphic on a domain if ∂f/∂z* = 0 at all points in the domain (where z* is the complex conjugate of z).

So, for f(z) = z*, the above condition fails.
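That condition is easy to test numerically (my own sketch; the functions, point, and step size are arbitrary choices): ∂f/∂z* can be computed as ½(∂f/∂x + i ∂f/∂y), and it vanishes for an analytic function like z² but equals 1 for z*.

```python
def d_dzbar(f, z, h=1e-6):
    """Wirtinger derivative df/dz* ~ (1/2)(df/dx + i*df/dy) by central differences."""
    dfdx = (f(z + h) - f(z - h)) / (2 * h)
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return 0.5 * (dfdx + 1j * dfdy)

z0 = 1 + 2j
print(abs(d_dzbar(lambda z: z * z, z0)))          # ~ 0: z^2 is holomorphic
print(abs(d_dzbar(lambda z: z.conjugate(), z0)))  # = 1: conjugation is not
```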

Hi John

Does this idea extend to functions based on complex functions like Laplace transforms?

Thanks

Peter

Great answer. Have you ever seen Visual Complex Analysis by Tristan Needham? He discusses differentiation as the “amplitwist,” similar to your description here.