Here are two common but unhelpful ways to think about infinity.

- Infinity makes things harder.
- Infinity is a useless academic abstraction.

Neither of these is necessarily true. Problems are often formulated in terms of infinity to make things easier and to solve realistic problems. Infinity is usually a simplification. Think of infinity as “so big I don’t have to worry about how big it is.” Here are three examples.

In computer science, a **Turing machine** is an idealization of a computer. It is said to have an infinite tape that it reads back and forth. You could think of that as saying you can have as long a tape as you need.
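The "as long a tape as you need" idea can be made concrete in code. Here's a minimal sketch (my own illustration, not any standard Turing machine implementation) of a tape backed by a dictionary: the tape is conceptually infinite, but only the cells you actually touch take up memory.

```python
from collections import defaultdict

# A Turing machine tape modeled as a dictionary. Every cell defaults to
# the blank symbol "_", so the tape is "infinite" in both directions,
# yet only the cells that have been used are ever stored.
tape = defaultdict(lambda: "_")

tape[0] = "1"
tape[5] = "0"
tape[-3] = "1"   # the tape extends to negative positions too

print(tape[100])  # an unvisited cell reads as blank: "_"
```

You never have to decide up front how long the tape is, which is exactly the simplification the infinite idealization buys you.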

Physics homework problems deal with **infinite capacitors**. Don’t think of that as meaning a capacitor bigger than the universe. Interpret the problem as saying that the width of the capacitor is so large relative to its thickness that you don’t have to worry about edge effects.

In calculus, you could think of **infinite series** as a sequence of finite approximations. A Taylor series, for example, is a compact way of expressing an unlimited sequence of approximations of a function. You can get as close to the function as you’d like by including enough terms in the sequence.
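To see the "sequence of finite approximations" view in action, here is a short sketch computing partial sums of the Taylor series for e^x at x = 1. The number of terms is arbitrary; the point is that each extra term moves the finite sum closer to the infinite answer.

```python
import math

def exp_taylor(x, n_terms):
    """Partial sum of the Taylor series for e^x: sum of x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

# Each partial sum is a finite approximation to the infinite series.
for n in (2, 5, 10, 15):
    approx = exp_taylor(1.0, n)
    print(n, approx, abs(approx - math.e))
```

The error shrinks rapidly with the number of terms, which is what "you can get as close as you'd like" means in practice.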

The infinite case often guides our thinking about the big case. Take the Taylor series example. A Taylor series isn’t just a formal string of polynomial terms; it converges in some region. That means the infinite sequence of terms doesn’t behave arbitrarily: the partial sums get close to something as you include more terms. Knowing that the infinite sum converges tells you how to think about the finite approximations you select from the series.

When I went to grad school, my intention was to study functional analysis. Essentially this means infinite dimensional vector spaces. That sounds terribly abstract and useless, but it can be quite practical. My background in functional analysis served me well when I went on to study partial differential equations and numerical analysis.

Infinite dimensional spaces guide our thinking about large finite dimensional spaces. If you want to solve a practical problem in high dimensions, the infinite dimensional case may be a better guide than ordinary three dimensional space. Continuity in infinite dimensional spaces requires structure that may not be apparent in low dimensions. Thinking about the infinite case may prepare you to exploit that structure in a large finite dimensional problem.

**Related posts**:

Alexandre Borovik discusses the idea of the infinite as an approximation to the very-large-finite in a post here.

Hi,

Infinity is a very useful term in computer science and also in software design.

Control software for airplanes, nuclear reactors, and other safety-critical systems, as well as operating systems, is designed to run “infinitely long.” Such software should never reach a deadlock, spin in a livelock cycle, or become completely blocked.

Those systems cannot be designed to run for a finite time; they need to run to infinity, unless they are terminated by the user. So from infinity to big…there is a huge step!

Keep up your work!

Interesting take. My current PhD topic is about dynamical systems in infinite dimensional coupled lattices, so infinity is part of my everyday ‘chores’. But of course, infinite feels just like finite-but-big once you get the hang of the problem. And indeed, when I need to work on high-dimensional problems, they don’t feel as threatening as before.

I have also done some work on the other side of the spectrum, with one-dimensional dynamics (complex dynamics, in particular), and curiously enough, wrote a post with some fractal images which I titled Infinite complexity. These images came from an algorithm for drawing certain boundaries, for a poster I presented last year in Denmark.

Ruben

Terence Tao’s article might be helpful:

http://terrytao.wordpress.com/2007/05/23/soft-analysis-hard-analysis-and-the-finite-convergence-principle/

He calls dealing with infinity ‘soft analysis’ and explains how it can make things simpler and how to convert between the big and the infinite.

I might go along with most of what you are saying provided we are only talking about one (uncompleted) infinity and not a whole stack of completed ones.

http://www.math.rutgers.edu/~zeilberg/Opinion111.html

is more my way of looking at things.

Infinite is also often simpler than finite in terms of Kolmogorov Complexity.

Here’s one of the comments from the discussion of this post on Reddit that was spot on. cgibbard says

Yes, that is a good example of what I was getting at.

John, could you explain your point that functional analysis is useful for PDEs and numerical analysis? It deals mostly with continuum-dimensional spaces (like function spaces), so I can’t see how it can be applied to finite-dimensional problems.

I graduated recently; my major was applied math and computer science. Still, I had comprehensive classes on real, complex, and functional analysis, and I’m inclined to think of them as a waste of time.

Here’s an example. Suppose you’re solving a PDE using Fourier series. The exact solution lives in an infinite-dimensional space. The functions sin(n x) and cos(n x) form a basis for the space. When you truncate the series to form an approximation to the solution, you’re projecting the exact solution onto a finite-dimensional subspace. Theorems from (infinite dimensional) Fourier series help you understand your finite dimensional approximation.
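The truncation-as-projection idea is easy to demonstrate numerically. Here's a sketch (my own toy example, not from any particular PDE) using the Fourier sine series of a square wave: keeping n terms projects the exact function onto the span of sin(x), sin(3x), …, sin((2n−1)x), and the mean-square error shrinks as the subspace grows.

```python
import numpy as np

def square_wave_approx(x, n_terms):
    """Truncated Fourier sine series for sign(x) on (-pi, pi):
    (4/pi) * sum of sin((2k+1)x) / (2k+1) over the first n_terms terms."""
    return (4 / np.pi) * sum(
        np.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

x = np.linspace(-np.pi, np.pi, 1001)
exact = np.sign(x)

# Root-mean-square error of the projection onto larger and larger subspaces.
for n in (1, 5, 50):
    rms = np.sqrt(np.mean((square_wave_approx(x, n) - exact) ** 2))
    print(n, rms)
```

(The maximum error doesn't go to zero here because of the Gibbs phenomenon, but the mean-square error does — another fact the infinite-dimensional theory tells you about your finite approximation.)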

A better example might be finite element methods. The theory of finite elements closely tracks the analytic theory of PDEs. It’s relatively easy to go back and forth between the two, each one helping you understand the other.

Numerical linear algebra is very similar to functional analysis too, or at least there’s a lot of overlap. And numerical PDE methods reduce to numerical linear algebra.

Here’s an example. Suppose you’re solving a PDE using Fourier series. The exact solution lives in an infinite-dimensional space.

Okay, if you mean things like Parseval’s identity, I see your point. But I was not used to thinking of that as a subject of functional analysis. I still don’t see any straightforward application for the likes of the Hahn–Banach theorem.

My favourite example of infinity making things easier is in the geometric series. Why calculate the NPV of a company using 5,000 terms when you could use two? NPV[now, ∞] − NPV[40 quarters, ∞] = NPV[now, 10 years from now].
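That shortcut is easy to check numerically. Here's a sketch (the quarterly discount rate and $1 cash flow are illustrative assumptions): a 40-quarter annuity computed the brute-force way, term by term, matches the difference of two infinite geometric sums in closed form.

```python
# Present value of $1 per quarter for 40 quarters (10 years).
r = 0.02          # quarterly discount rate (illustrative assumption)
d = 1 / (1 + r)   # one-quarter discount factor

# Brute force: sum the 40 discounted payments directly.
brute_force = sum(d**k for k in range(1, 41))

# Two terms: NPV[now, inf] - NPV[after 40 quarters, inf],
# each an infinite geometric series with a closed form.
infinite_minus_tail = d / (1 - d) - d**41 / (1 - d)

print(brute_force, infinite_minus_tail)
```

The two infinite sums are each one-line closed forms, so the "harder" infinite problem replaces dozens (or thousands) of finite terms with simple arithmetic.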

On the question of whether infinite or big is easier, I think there’s some room for dispute. 1, 2, 3, …, ∞ definitely requires less mental space than Graham’s number. But Cantor’s highest ordinal in first-order Peano arithmetic, ε₀, which is another infinity, requires a lot more mental space. (perhaps still less than Graham’s number, but using higher Peano arithmetics I could come up with an ordinal that requires as many →’s to get to. And maybe if one includes the loops and the space required to understand a well-ordering, the mental size of ε₀ is the same as the mental size of Graham’s number)

We could include in “uses of infinity” any result whose proof uses a projective point at infinity, including the Cauchy distribution and any use of the Riemann sphere. I remember in The Road to Reality, Penrose talks about decomposing hyperfunctions into the northern and southern halves of the Riemann sphere. (Not sure what the application is, but maybe signal processing or something?)

Although I think physicists are naturally inclined to believe that since the world is finite in size, infinity is merely a convenient approximation to reality. I see this kind of like the Cantor/Kronecker debate (apparently Cantor believed he was investigating messages from God, since God ≈ ∞): viewing mathematics as reality itself rather than as a modelling tool.

Thinking from a psych. or econ. perspective, it’s generally assumed that “People don’t reduce to equations”. Often my response to such a criticism would involve an infinite-dimensional space (although I wouldn’t say so). The tools that come out of ∞-sized mathematics are often easier to shape into a human-like form.