When does a function equal its Taylor series?

Taylor’s theorem says

f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n

When does the thing on the left equal the thing on the right?

A few things could go wrong:

  • Maybe not all the terms on the right side exist, i.e. the function f might not be infinitely differentiable.
  • Maybe f is infinitely differentiable but the series diverges.
  • Maybe f is infinitely differentiable but the series converges to something other than f.

The canonical example of the last problem is the function g(x) defined to be exp(−1/x²) for positive x and 0 otherwise. This function is so flat as it approaches the origin from the right that all of its derivatives are zero there. So all the coefficients in Taylor’s formula are zero. But the function g is positive for positive x.
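Written out,

g(x) = \begin{cases} e^{-1/x^2} & x > 0 \\ 0 & x \le 0. \end{cases}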

Everything above can be found in a standard calculus text, but the following results are hardly known.

Being infinitely differentiable is necessary but not sufficient for a function to be real analytic. What additional requirements would be sufficient?

We start by examining what went wrong with the function g(x) above. It has a Taylor series at every point, but the radii of convergence of these series shrink to zero as the center approaches the origin from the right. At the origin itself the Taylor series converges everywhere, but it converges to the zero function rather than to g, so g is not analytic there.

Alfred Pringsheim claimed that it is enough to require that the radii of convergence be bounded below on an interval. If we let ρ(x) be the radius of convergence for a series centered at x, then a function f is analytic in an open interval J if ρ(x) has some lower bound δ > 0 in J. Pringsheim’s theorem was correct, but his proof was flawed. The proof was accepted for 40 years until Ralph Boas discovered the flaw.

Here is a generalization of Pringsheim’s theorem. (It’s not clear to me from my source [1] who first proved this theorem. Perhaps Boas, but in the same context Boas mentions the work of others on the problem.)

Let f be infinitely differentiable in an open interval J. Suppose the radius of convergence ρ(x) for a Taylor series centered at x is positive for each x in the interval J. Suppose also that for every point p in J we have

\liminf_{x\to p} \frac{\rho(x)}{|x-p|} > 1.

Then f is analytic on the interval J.

Note that if ρ(x) is bounded below by δ > 0 then the lim inf above is infinite, and so Pringsheim’s theorem follows as a special case.

The function g(x) above does not satisfy the hypothesis of the theorem on any open interval around 0 because if we set p = 0, the lim inf above equals 1, not greater than 1.

* * *

[1] R. P. Boas. When Is a C∞ Function Analytic? Mathematical Intelligencer, Vol. 11, No. 4, pp. 34–37.

 

Valuing results and information

Chris Wiggins gave an excellent talk at Rice this afternoon on data science at the New York Times. In the Q&A afterward, someone asked how you would set up a machine learning algorithm where you’re trying to optimize for outcomes and for information.

Here’s how I’ve approached this dilemma in the past. Information and outcomes are not directly comparable, so any objective function combining the two is going to add incommensurate things. One way around this is to put a value not on information per se but on what you’re going to do with the information.

In the context of a clinical trial, you want to treat patients in the trial effectively, and you want a high probability of picking the best treatment at the end. You can’t add patients and probabilities meaningfully. But why do you want to know which treatment is best? So you can treat future patients effectively. The value of knowing which treatment is best is the increase in the expected number of future successful treatments. This gives you a meaningful objective: maximize the expected number of effective treatments, counting both the patients in the trial and the future patients treated as a result of the trial.
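To put that in symbols (my notation, not from the talk): for a fixed scenario of true success probabilities p_i, if P_i is the probability that the trial ends by selecting treatment i, and N is the number of future patients who would be treated according to that selection, the objective is to maximize

E[\text{successes in the trial}] + N \sum_i P_i \, p_i.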

The hard part is guessing what will be done with the information learned in the trial. Of course this isn’t exactly known, but it’s the key to estimating the value of the information. If nobody will be treated any differently in the future no matter how the trial turns out—and unfortunately this may be the case—then the trial should be optimized strictly for the benefit of the people in the trial. On the other hand, if a trial will determine the standard of care for thousands of future patients, then there is more value in picking the best treatment.

Computing discrete logarithms with baby-step giant-step algorithm

At first “discrete logarithm” sounds like a contradiction in terms. Logarithms aren’t discrete, not as we usually think of them. But you can define and compute logarithms in modular arithmetic.

What is a logarithm? It’s the solution to an exponential equation. For example, the logarithm base 10 of 2 is the solution to the equation 10^x = 2, and so x = 0.30103. Similarly, you could look for the logarithm base 10 of 2 modulo 19. This is an integer value of x such that 10^x = 2 mod 19, and you can show that 17 is a solution.
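A quick check in Python confirms this:

     >>> 10**17 % 19
     2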

If we work modulo an integer n, the discrete logarithm base a of a number y is an integer x such that a^x = y mod n. For some values of a there may not be a solution, but there will be a solution if a is a generator of the multiplicative group of integers mod n.

Brute force is the simplest algorithm for finding discrete logarithms: try x = 1, 2, 3, …, n until you find a value of x satisfying a^x = y. The problem with brute force is that the expected run time is on the order of n, and n is often very large in applications.

Discrete logarithms are expensive to compute, but we can do better than brute force. (Cryptographic applications take advantage of the fact that discrete logarithms are hard to compute.) There’s a simple algorithm by Daniel Shanks, known as the baby-step giant-step algorithm, that reduces the run time from order n to order roughly √n. (Actually O(√n log n) for reasons we’ll see soon.)

Let s be the ceiling of the square root of n. The idea is to search for x by taking giant steps forward (multiples of s) and baby (single) steps backward.

Let G be the set of pairs (a^(gs) mod n, gs) where g ranges from 1 to s. These are the giant steps.

Let B be the set of pairs (ya^b mod n, b) where b ranges from 0 to s – 1. These are the baby steps.

Now look for a pair in G and a pair in B with matching first components, i.e. ya^b = a^(gs). Then y = a^(gs – b), and so x = gs – b is the required solution.

Constructing the sets G and B requires O(s) operations, and matching the two sets takes O(s log s) operations.

Here’s an example, going back to the problem above of finding the discrete log base 10 of 2 mod 19, using a little Python code to help with the calculations.

The square root of 19 is between 4 and 5 so s = 5.

     >>> [(2*10**b % 19, b) for b in range(5)]
     [(2, 0), (1, 1), (10, 2), (5, 3), (12, 4)]
     >>> [(10**(5*g) % 19, 5*g) for g in range(1, 6)]
     [(3, 5), (9, 10), (8, 15), (5, 20), (15, 25)]

The number 5 appears as the first element of a pair in both B and G. We have (5, 3) in B and (5, 20) in G so x = 20 – 3 = 17.
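Packaged as a function, the whole procedure looks something like the following sketch. This is my own code, not from any library, and it assumes Python 3.8+ for math.isqrt. Storing the baby steps in a dictionary makes each giant-step lookup constant time on average, so the matching step is roughly O(s) rather than the O(s log s) of sorting.

    from math import isqrt

    def bsgs(a, y, n):
        s = isqrt(n - 1) + 1                # ceiling of the square root of n
        # baby steps: y * a^b mod n for b = 0, ..., s-1
        baby = {y * pow(a, b, n) % n: b for b in range(s)}
        # giant steps: a^(g*s) mod n for g = 1, ..., s
        for g in range(1, s + 1):
            value = pow(a, g * s, n)
            if value in baby:
                return g * s - baby[value]  # x = gs - b
        return None                         # no discrete log found

    print(bsgs(10, 2, 19))                  # prints 17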

Related: Applied number theory

Interim analysis, futility monitoring, and predictive probability

An interim analysis of a clinical trial is an unusual analysis. At the end of the trial you want to estimate how well some treatment X works. For example, you want to know how likely it is that treatment X works better than the control treatment Y. But in the middle of the trial you want to know something more subtle.

It’s possible that treatment X is doing so poorly that you want to end the trial without going any further. It’s also possible that X is doing so well that you want to end the trial early. Both of these are rare. Most of the time an interim analysis is more concerned with futility. You might want to stop the trial early not because the results are really good, or really bad, but because the results are really mediocre! That is, treatments X and Y are performing so similarly that you’re afraid that you won’t be able to declare one or the other better.

Maybe treatment X is doing a little better than Y, but not so much better that you can declare with confidence that X is better. You might want to stop for futility if you project that not only do you not have enough evidence now, you are unlikely to have enough evidence by the end of the trial.

Futility analysis is more about resources than ethics. If X is doing poorly, ethics might dictate that you stop giving X to patients so you stop early. If X is doing spectacularly well, ethics might dictate that you stop giving the control treatment, if there is an active control. But if X is doing so-so, there’s usually not an ethical reason to stop, unless X is worse than Y on some secondary criteria, such as having worse side effects. You want to end futile studies so you can save resources and get on with the next study, and you could argue that’s an ethical consideration, though less direct.

Futility analysis isn’t about your current estimate of effectiveness. It’s about what you think your estimate of effectiveness will be in the future. That is, it’s a second order prediction. You’re trying to understand the effectiveness of the trial, not of the treatment per se. You’re not trying to estimate a parameter, for example, but trying to estimate what range of estimates you’re likely to make.

This is why predictive probability is natural for interim analysis. You’re trying to predict outcomes, not parameters. (This is subtle: you’re trying to estimate the probability of outcomes that lead to certain estimates of parameters, namely those that allow you to reach a conclusion with pre-specified significance.)

Predictive probability is a Bayesian concept, but it is useful in analyzing frequentist trial designs. You may have frequentist conclusion criteria, such as a p-value threshold or some requirements on a confidence interval, but you want to know how likely it is that if the trial continues, you’ll see data that lead to meeting your criteria. In that case you want to compute the (Bayesian) predictive probability of meeting your frequentist criteria!
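Here is one way to compute such a predictive probability by simulation. This is a minimal sketch of my own, not from the post: a hypothetical single-arm trial with a binary outcome, a Beta(1, 1) prior, a null response rate p0, and a one-sided exact binomial test at the final analysis. All the numbers are made-up assumptions.

    import numpy as np
    from scipy.stats import binom

    def predictive_probability(successes, n_interim, n_final, p0=0.3,
                               alpha=0.05, a=1, b=1, n_sims=100_000, seed=0):
        rng = np.random.default_rng(seed)
        m = n_final - n_interim                   # patients still to be enrolled
        # posterior for the response rate given interim data: Beta(a + x, b + n - x)
        p = rng.beta(a + successes, b + n_interim - successes, n_sims)
        future = rng.binomial(m, p)               # predictive draws of future successes
        total = successes + future
        # one-sided exact p-value at the final analysis: P(X >= total | p = p0)
        pvals = binom.sf(total - 1, n_final, p0)
        return np.mean(pvals < alpha)

    # 8 responses in 20 patients so far; chance of significance at n = 50?
    print(predictive_probability(successes=8, n_interim=20, n_final=50))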


Periods of fractions

Suppose you have a fraction a/b where 0 < a < b, and a and b are relatively prime integers. The decimal expansion of a/b either terminates or it has an initial non-repeating part followed by a repeating part.

How long is the non-repeating part? How long is the period of the repeating part?

The answer depends on the prime factorization of the denominator b. If b has the form

b = 2^α 5^β

then the decimal expansion terminates after r digits, where r = max(α, β).

Otherwise b has the factorization

b = 2^α 5^β d

where d is some integer greater than 1 and relatively prime to 10. Then the decimal expansion of a/b has r non-repeating digits and a repeating part with period s where r is as above, and s is the smallest positive integer such that

d | 10^s – 1,

i.e. the smallest s such that 10^s – 1 is divisible by d. How do we know there exists any such integer s? This isn’t obvious.

Euler’s theorem (which reduces to Fermat’s little theorem when d is prime) tells us that

d | 10^φ(d) – 1

and so we could take s = φ(d), though this may not be the smallest such s. So not only does s exist, we know that it is at most φ(d), which in turn is at most d – 1. This means that the period of the repeating part of a/b is no more than d – 1 where d is what’s left after factoring out as many 2’s and 5’s from b as possible.

By the way, this means that you can take any integer d, not divisible by 2 or 5, and find some integer k such that the product dk consists only of 9’s.
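Here is a small Python sketch of my own, not from any reference, that computes r and s for a given denominator b by stripping out factors of 2 and 5 and then finding the multiplicative order of 10 mod d.

    def decimal_period(b):
        alpha = 0
        while b % 2 == 0:            # factor out 2's
            b //= 2
            alpha += 1
        beta = 0
        while b % 5 == 0:            # factor out 5's
            b //= 5
            beta += 1
        r = max(alpha, beta)
        d = b                        # what's left is relatively prime to 10
        if d == 1:
            return r, 0              # the expansion terminates
        s, power = 1, 10 % d         # smallest s with 10^s = 1 mod d
        while power != 1:
            power = power * 10 % d
            s += 1
        return r, s

    print(decimal_period(12))        # (2, 1): 1/12 = 0.08333...
    print(decimal_period(7))         # (0, 6): 1/7 = 0.142857 142857 ...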

Related post: Cyclic fractions

Speeding up R code

People often come to me with R code that’s running slower than they’d like. It’s not unusual to make the code 10 or even 100 times faster by rewriting it in C++.

Not all that speed improvement comes from changing languages. Some of it comes from better algorithms, eliminating redundancy, etc.

Why bother optimizing?

If code is running 100 times slower than you’d like, why not just run it on 100 processors? Sometimes that’s the way to go. But maybe the code doesn’t split up easily into pieces that can run in parallel. Or maybe you’d rather run the code on your laptop than send it off to the cloud. Or maybe you’d like to give your code to someone else and you want them to be able to run the code conveniently.

Optimizing vs rewriting R

It’s sometimes possible to tweak R code to make it faster without rewriting it, especially if it is naively using loops for things that could easily be vectorized. And it’s possible to use better algorithms without changing languages.

Beyond these high-level changes, there are a number of low-level changes that may give you a small speed-up. This way madness lies. I’ve seen blog posts to the effect “I rewrote this part of my code in the following non-obvious way, and for reasons I don’t understand, it ran 30% faster.” Rather than spending hours or days experimenting with such changes and hoping for a small speed up, I use a technique fairly sure to give a 10x speed up, and that is rewriting (part of) the code in C++.

If the R script is fairly small, and if I have C++ libraries to replace all the necessary R libraries, I’ll rewrite the whole thing in C++. But if the script is long, or has dependencies I can’t replace, or only has a small section where nearly all the time is spent, I may just rewrite that portion in C++ and call it from R using Rcpp.

Simulation vs analysis

The R programs I’ve worked on often compute something approximately by simulation that could be calculated exactly much faster. This isn’t because the R language encourages simulation, but because the language is used by statisticians who are more inclined to use simulation than analysis.

Sometimes a simulation amounts to computing an integral. It might be possible to compute the integral in closed form with some pencil-and-paper work. Or it might be possible to recognize the integral as a special function for which you have efficient evaluation code. Or maybe you have to approximate the integral, but you can do it more efficiently by numerical analysis than by simulation.

Redundancy vs memoization

Sometimes it’s possible to speed up code, written in any language, simply by not calculating the same thing unnecessarily. This could be something simple, like moving code that doesn’t need to be there out of inner loops, or it could be something more sophisticated, like memoization.

The first time it sees a function called with a new set of arguments, memoization saves the result and creates a way to associate the arguments with the result in some sort of look-up table, such as a hash. The next time the function is called with the same argument, the result is retrieved from memory rather than recomputed.

Memoization works well when the set of unique arguments is fairly small and the calculation is expensive relative to the cost of looking up results. Sometimes the set of potential arguments is very large, and it looks like memoization won’t be worthwhile, but the set of actual arguments is small because some arguments are used over and over.
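Here’s what memoization looks like in a short Python sketch (my own illustration; the same idea carries over to R, for example by caching results inside a closure).

    from functools import wraps

    def memoize(f):
        cache = {}                      # look-up table keyed by the arguments
        @wraps(f)
        def wrapper(*args):
            if args not in cache:
                cache[args] = f(*args)  # first call with these arguments: compute and save
            return cache[args]          # later calls: retrieve from memory
        return wrapper

    @memoize
    def slow_square(x):
        # stand-in for an expensive calculation
        return x * x

    print(slow_square(12))   # computed
    print(slow_square(12))   # looked up, not recomputed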


The big deal about neural networks

In their book Computer Age Statistical Inference, Brad Efron and Trevor Hastie give a nice description of neural networks and deep learning.

The knee-jerk response [to neural networks] from statisticians was “What’s the big deal? A neural network is just a nonlinear model, not too different from many other generalizations of linear models.”

While this may be true, neural networks brought a new energy to the field. They could be scaled up and generalized in a variety of ways … And most importantly, they were able to solve problems on a scale far exceeding what the statistics community was used to. This was part computing scale expertise, part liberated thinking and creativity on the part of this computer science community.

After enjoying considerable popularity for a number of years, neural networks were somewhat sidelined by new inventions in the mid 1990’s. … Neural networks were passé. But then they re-emerged with a vengeance after 2010…the reincarnation now being called deep learning.

Gentle introduction to R

The R language is closely tied to statistics. Its ancestor was named S, because it was a language for Statistics. The open source descendant could have been named ‘T’, but its creators chose to call it ‘R’.

Most people learn R as they learn statistics: Here’s a statistical concept, and here’s how you can compute it in R. Statisticians aren’t that interested in the R language itself but see it as connective tissue between commands that are their primary interest.

This works for statisticians, but it makes the language hard for non-statisticians to approach. Years ago I managed a group of programmers who supported statisticians. At the time, there were no books for learning R without concurrently learning statistics. This created quite a barrier to entry for programmers whose immediate concern was not the statistical content of an R program.

Now there are more books on R, and some are more approachable to non-statisticians. The most accessible one I’ve seen so far is Learning Base R by Lawrence Leemis. It gets into statistical applications of R—that is ultimately why anyone is interested in R—but it doesn’t start there. The first 40% or so of the book is devoted to basic language features, things you’re supposed to pick up by osmosis from a book focused more on statistics than on R per se. This is the book I wish I could have handed my programmers who had to pick up R.

Turning math inside-out

Here’s one of the things about category theory that takes a while to get used to.

Mathematical objects are usually defined internally. For example, the Cartesian product P of two sets A and B is defined to be the set of all ordered pairs (a, b) where a comes from A and b comes from B. The definition of P depends on the elements of A and B but it does not depend on any other sets.

Category theory turns this inside-out. Operations such as taking products are not defined in terms of elements of objects. Category theory makes no use of elements or subobjects [1]. It defines things by how they act, not their inner workings. People often stress what category theory does not depend on, but they less often stress what it does depend on. The definition of the product of two objects in any category depends on all objects in that category: The definition of the product of objects A and B contains the phrase “such that for any other object X …” [More on categorical products].
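To spell out that dependence on X, here is the standard universal property in the notation above: a product of A and B is an object A × B together with morphisms π₁ : A × B → A and π₂ : A × B → B such that for any object X and any morphisms f : X → A and g : X → B, there is a unique morphism h : X → A × B with π₁ ∘ h = f and π₂ ∘ h = g.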

The payoff for this inside-out approach to products is that you can say something simultaneously about everything that acts like a product, whether it’s products of sets, products of fields (i.e. that they don’t exist), products of groups, etc. You can’t say something valid across multiple categories if you depend on details unique to one category.

This isn’t unique to products. Universal properties are everywhere. That is, you see definitions containing “such that for any other object X …” all the time. In this sense, category theory is extremely non-local. The definition of a widget often depends on all widgets.

There’s a symmetry here. Traditional definitions depend on the internal workings of objects, but only on the objects themselves. There are no third parties involved in the definition. Categorical definitions have zero dependence on internal workings, but depend on the behavior of everything in the category. There are an infinite number of third parties involved! [2] You can have a definition that requires complete internal knowledge but zero external knowledge, or a definition that requires zero internal knowledge and an infinite amount of external knowledge.

Related: Applied category theory

* * *

[1] Category theory does have notions analogous to elements and subsets, but they are defined the same way everything else is in category theory, in terms of objects and morphisms, not by appealing to the inner structure of objects.

[2] You can have a category with a finite number of objects, but usually categories are infinite. In fact, they are usually so large that they are “classes” of objects rather than sets.

Optimal team size

Kevlin Henney’s keynote at GOTO Copenhagen this year discussed how project time varies as a function of the number of people on the project. The most naive assumption is that the time is inversely proportional to the number of people. That is

t = W/n

where t is the calendar time to completion, W is a measure of how much work is to be done, and n is the number of people. This assumes everything on the project can be done in parallel. Nobody waits for anybody else.

The next refinement is to take into account the proportion of work that can be done in parallel. Call this p. Then we have

t = W[1 – p(n-1)/n].

If everything can be done in parallel, p = 1 and t = W/n as before. But if nothing can be done in parallel, p = 0, and so t = W. In other words, the total time is the same whether one person is on the project or many. This is essentially Amdahl’s law.

With the equation above, adding people never slows things down. And if p > 0, every additional person helps at least a little bit.

Next we add a term to account for communication cost. Assume communication costs are proportional to the number of communication paths, n(n – 1)/2. Call the proportionality constant k. Now we have

t = W[1 – p(n-1)/n + kn(n-1)/2].

If k is small but positive, then at first adding more people causes a project to complete sooner. But beyond some optimal team size, adding more people causes the project to take longer.
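To see the trade-off numerically, here is a small sketch of the model; the values of W, p, and k are arbitrary illustrative assumptions, not estimates from any real project.

    def project_time(n, W=100.0, p=0.9, k=0.01):
        # time with n people: serial part, parallel part, and communication cost
        return W * (1 - p * (n - 1) / n + k * n * (n - 1) / 2)

    times = {n: project_time(n) for n in range(1, 31)}
    best = min(times, key=times.get)
    print(best, round(times[best], 1))   # team size that minimizes t under these assumptions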

Of course none of this is exact. Project time estimation doesn’t follow any simple formula. Think of these equations more as rough guides or metaphors. It’s certainly true that beyond a certain size, adding more people to a project can slow the project down. Kevlin gave examples of projects that were put back on track by reducing the number of people working on them.

My quibble with the equation above is that I don’t think the cost of more people is primarily communication. Communication paths in a real project are not the simple trees of org charts, but neither are they complete graphs. And if the problem were simply communication, then improved communication would mitigate the cost of adding people to a project, though I imagine it hardly does.

I think the cost of adding people to a project has more to do with Parkinson’s Law which says that people make work for each other. (The aphorism form of Parkinson’s Law says that work expands to the time allowed. But the eponymous book explains why work expands, and it is in part because people make extra work for each other.)


I wrote about a similar theme in the blog post Maybe you only need it because you have it. Here’s the conclusion of that post:

Suppose a useless project adds staff. These staff need to be managed, so they hire a manager. Then they hire people for IT, accounting, marketing, etc. Eventually they have their own building. This building needs security, maintenance, and housekeeping. No one questions the need for the security guard, but the guard would not have been necessary without the original useless project.

When something seems absolutely necessary, maybe it’s only necessary because of something else that isn’t necessary.

Efficiency of C# on Linux

This week I attended Mads Torgersen’s talk Why you should take another look at C#. Afterward I asked him about the efficiency of C# on Linux. When I last looked into it, it wasn’t good. A few years ago I asked someone on my team to try running some C# software on Linux using Mono. The code worked correctly, but it ran several times slower than on Windows.

Mads said that now with .NET Core, C# code runs about as fast on Linux as on Windows. Maybe a few percent slower on Linux. Scott Hanselman joined the conversation and explained that with .NET Core, the same code runs on every platform. The Mono project duplicated much of the functionality of the .NET framework, but it was an entirely independent implementation and not as efficient.

I had assumed the difference was due to compiler optimizations, or the lack thereof, but Scott and Mads said that the difference was mostly in the implementation. There are some compiler optimizations that are better on the Windows side, so C# might run a little faster on Windows, but the performance is essentially the same on both platforms.

I could recommend C# to a client running Linux if there’s a 5% performance penalty, but a 500% performance penalty would be a show-stopper. Now I’d consider using C# on a project where I need more performance than Python or R, but want something easier to work with than C++.

Years ago I developed software with the Microsoft stack, but I moved away from Microsoft tools when I quit doing the kind of software development the tools are geared for. So I don’t write C# much any more. It’s been about a year since I had a project where I needed to write C# code. But I like the C# language. You can tell that a lot of thought has gone into the design, and the tool support is great. Now that the performance is better on Linux I’d consider using it for numerical simulations.

Mathematical modeling for medical devices

We’re about to see a lot of new, powerful, inexpensive medical devices come out. And to my surprise, I’ve contributed to a few of them.

Growing compute power and shrinking sensors open up possibilities we’re only beginning to explore. Even when the things we want to observe elude direct measurement, we may be able to infer them from other things that we can now measure accurately, inexpensively, and in high volume.

In order to infer what you’d like to measure from what you can measure, you need a mathematical model. Or if you’d like to make predictions about the future from data collected in the past, you need a model. And that’s where I come in. Several companies have hired me to help them create medical devices by working on mathematical models. These might be statistical models, differential equations, or a combination of the two. I can’t say much about the projects I’ve worked on, at least not yet. I hope that I’ll be able to say more once the products come to market.

I started my career doing mathematical modeling (partial differential equations) but wasn’t that interested in statistics or medical applications. Then through an unexpected turn of events, I ended up spending a dozen years working in the biostatistics department of the world’s largest cancer center.

After leaving MD Anderson and starting my consultancy, several companies have approached me for help with mathematical problems associated with their idea for a medical device. These are ideal projects because they combine my earlier experience in mathematical modeling with my more recent experience with medical applications.

If you have an idea for a medical device, or know someone who does, let’s talk. I’d like to help.

 

Publishable

For an article to be published, it has to be published somewhere. Each journal has a responsibility to select articles relevant to its readership. Articles that make new connections might be unpublishable because they don’t fit into a category. For example, I’ve seen papers rejected by theoretical journals for being too applied, and the same papers rejected by applied journals for being too theoretical.

“A bird may love a fish but where would they build a home together?” — Fiddler on the Roof

 

One of my favorite proofs: Lagrange multipliers

One of my lightbulb moments in college was when my professor, Jim Vick, explained the Lagrange multiplier theorem. The way I’d seen it stated in a calculus text gave me no feel for why it should be true, but his explanation made sense immediately.

Suppose f(x) is a function of several variables, i.e. x is a vector, and g(x) = c is a constraint. Then the Lagrange multiplier theorem says that at the maximum of f subject to the constraint g we have ∇f = λ ∇g.

Where does this mysterious λ come from? And why should the gradient of your objective function be related to the gradient of a constraint? These seem like two different things that shouldn’t even be comparable.

Here’s the geometric explanation. The set of points satisfying g(x) = c is a surface. And for any k, the set of points satisfying f(x) = k is also a surface. Imagine k very large, larger than the maximum of f on the surface defined by g(x) = c. You could think of the surface g(x) = c as a balloon inside the larger balloon f(x) = k.

Now gradually decrease k, like letting the air out of the outer balloon, until the surfaces g(x) = c and f(x) = k first touch. At that point, the two surfaces will be tangent, and so their normal vectors, given by their gradients, point in the same direction. That is, ∇f and ∇g are parallel, and so ∇f is some multiple of ∇g. Call that multiple λ.
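Here’s a small worked example of my own to make the statement concrete. Maximize f(x, y) = xy subject to the constraint g(x, y) = x + y = 1. The gradients are ∇f = (y, x) and ∇g = (1, 1), so ∇f = λ ∇g gives y = λ and x = λ. Hence x = y, and the constraint forces x = y = 1/2, where the constrained maximum f = 1/4 occurs.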

I don’t know how well that explanation works when written down. But when I heard Jim Vick explain it, moving his hands in the air, it was an eye-opener.

This is not a rigorous proof, and it does not give the most general result possible, but it explains what’s going on. It’s something to keep in mind when reading proofs that are more rigorous or more general. As I comment on here,

Proofs serve two main purposes: to establish that a proposition is true, and to show why it is true.

The literally hand-wavy proof scores low on the former criterion and high on the latter.

***

Jim Vick was a great teacher. Some of us affectionately called him The Grinning Demon because he was always smiling, even while he gave devilishly hard homework. He was Dean of Natural Sciences when I took a couple classes from him. He later became Vice President for Student Affairs and kept teaching periodically. He has since retired but still teaches.

After taking his topology course, several of us asked him to teach a differential geometry course. He hesitated because it’s a challenge to put together an undergraduate differential geometry course. The usual development of differential geometry uses so much machinery that it’s typically a graduate-level course.

Vick found a book that starts by looking only at manifolds given by level sets of smooth functions, like the surfaces discussed above. Because these surfaces sit inside a Euclidean space, you can quickly get to some interesting geometric results using elementary methods. We eventually got to the more advanced methods, but by then we had experience in a more tangible setting. As Michael Atiyah said, abstraction should follow experience, not precede it.