Inner product from norm

If a vector space has an inner product, it has a norm: you can define the norm of a vector to be the square root of the inner product of the vector with itself.

||v|| \equiv \langle v, v \rangle^{1/2}

You can use the defining properties of an inner product to show that

\langle v, w \rangle = \frac{1}{2}\left( || v + w ||^2 - ||v||^2 - ||w||^2 \right )

This is a form of the so-called polarization identity. It implies that you can calculate inner products if you can compute norms.
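
For example, here is a small Python sketch (mine, not from the original post) that recovers the ordinary dot product on R² from the Euclidean norm using the identity above.

    import numpy as np

    def polarize(norm, v, w):
        # Candidate inner product built from a norm via the polarization identity
        return 0.5 * (norm(v + w)**2 - norm(v)**2 - norm(w)**2)

    v = np.array([1.0, 2.0])
    w = np.array([3.0, -1.0])

    # The Euclidean norm does come from an inner product, so the identity
    # recovers the usual dot product.
    print(polarize(np.linalg.norm, v, w))  # 1.0
    print(np.dot(v, w))                    # 1.0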

So does this mean you can define an inner product on any space that has a norm?

No, it doesn’t work that way. The polarization identity says that if you have a norm that came from an inner product then you can recover that inner product from norms.

What would go wrong if we tried to use the equation above to define an inner product on a space that doesn’t have one?

Take the plane R² with the max norm, i.e.

|| (x, y) || \equiv \max(|x|, |y|)

and define a function that takes two vectors and returns the right-hand side of the polarization identity.

f(v, w) = \frac{1}{2}\left( || v + w ||^2 - ||v||^2 - ||w||^2 \right )

This is a well-defined function, but it’s not an inner product. An inner product is bilinear; in particular, if you multiply one of the arguments by a constant, you multiply the inner product by the same constant.

To see that f is not an inner product, let v = (1, 0) and w = (0, 1). Then f(v, w) = -1/2, but f(2v, w) is also -1/2. Multiplying the first argument by 2 did not multiply the result by 2.
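
Here is a quick numerical check of that computation, using the same polarize function as in the sketch above (again just an illustration, not code from the post).

    import numpy as np

    def polarize(norm, v, w):
        return 0.5 * (norm(v + w)**2 - norm(v)**2 - norm(w)**2)

    max_norm = lambda u: np.max(np.abs(u))

    v = np.array([1.0, 0.0])
    w = np.array([0.0, 1.0])

    print(polarize(max_norm, v, w))    # -0.5
    print(polarize(max_norm, 2*v, w))  # -0.5, not 2 times -0.5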

When we say that R² with the max norm doesn’t have an inner product, it’s not simply that we forgot to define one. We cannot define an inner product that is consistent with the norm structure.

Duals and double duals of Banach spaces

The canonical examples of natural and unnatural transformations come from linear algebra, namely the relation between a vector space and its first and second duals. We will look briefly at the finite dimensional case, then concentrate on the infinite dimensional case.

Two finite-dimensional vector spaces over the same field are isomorphic if and only if they have the same dimension.

For a finite dimensional space V, its dual space V* is defined to be the vector space of linear functionals on V, that is, the set of linear functions from V to the underlying field. The space V* has the same dimension as V, and so the two spaces are isomorphic. You can do the same thing again, taking the dual of the dual, to get V**. This also has the same dimension, and so V is isomorphic to V** as well as to V*. However, V is naturally isomorphic to V** but not to V*: every isomorphism from V to V* depends on an arbitrary choice, such as a choice of basis, so none of them is natural.

Some things in linear algebra are easier to see in infinite dimensions, i.e. in Banach spaces. Distinctions that seem pedantic in finite dimensions clearly matter in infinite dimensions.

The category of Banach spaces has Banach spaces as objects and continuous linear transformations between them as morphisms. In a finite dimensional Euclidean space, all linear transformations are continuous, but in infinite dimensions a linear transformation is not necessarily continuous.

The dual of a Banach space V is the space of continuous linear functionals on V. Now we can see examples where not only is V* not naturally isomorphic to V, it’s not isomorphic at all.

For any real p > 1, let q be the number such that 1/p + 1/q = 1. The Banach space L^p is defined to be the set of (equivalence classes of) Lebesgue measurable functions f such that the integral of |f|^p is finite. The dual space of L^p is L^q. If p does not equal 2, then these two spaces are different. (If p does equal 2, then so does q; L^2 is a Hilbert space and its dual is indeed the same space.)
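
To spell out the identification (a standard statement, though not spelled out in the post): a function g in L^q acts on L^p by integration, and Hölder’s inequality shows that the resulting linear functional is bounded.

\varphi_g(f) = \int f g \, d\mu, \qquad |\varphi_g(f)| \le || f ||_p \, || g ||_q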

In the finite dimensional case, a vector space V is isomorphic to its second dual V**. In general, V can be embedded into V**, but V** might be a larger space. The embedding of V in V** is natural, both in the intuitive sense and in the formal sense of natural transformations, discussed in the previous post. We can turn an element of V into a linear functional on linear functions on V as follows.

Let v be an element of V and let f be an element of V*. The action of v on f is simply f(v). That is, v acts on linear functions by letting them act on it!
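
In code, the embedding is nothing more than a higher-order function. A minimal Python sketch (the names are hypothetical, just to illustrate the idea):

    def embed(v):
        # Send v to the element of V** that evaluates functionals at v
        return lambda f: f(v)

    # A linear functional on R^2 and a vector in R^2
    f = lambda u: 3*u[0] - u[1]
    v = (2, 5)

    print(f(v))         # 1
    print(embed(v)(f))  # 1, the same value: v acts on f by letting f act on v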

This shows that some elements of V** come from evaluation at elements of V, but there could be more. Returning to the example of Lebesgue spaces above, the dual of L^1 is L^∞, the space of essentially bounded functions. But the dual of L^∞ is larger than L^1. That is, one way to construct a continuous linear functional on bounded functions is to multiply them by an absolutely integrable function and integrate. But there are other ways to construct linear functionals on L^∞.

A Banach space V is reflexive if the natural embedding of V in V** is an isomorphism. For p > 1, the spaces L^p are reflexive.

However, R. C. James proved the surprising result that there are Banach spaces that are isomorphic to their second duals, but not naturally. That is, there are spaces V where V is isomorphic to V**, but not via the natural embedding; the natural embedding of V into V** is not an isomorphism.


Some ways linear algebra is different in infinite dimensions

There’s no notion of continuity in linear algebra per se. It’s not part of the definition of a vector space. But a finite dimensional vector space over the reals is isomorphic to a Euclidean space of the same dimension, and so we usually think of such spaces as Euclidean. (We’re only going to consider real vector spaces in this post.) And there we have a notion of distance, a norm, and hence a topology and a way to say whether a function is continuous.

Continuity

In finite dimensional Euclidean space, linear functions are continuous. You can put a different norm on a Euclidean space than the one it naturally comes with, but all norms give rise to the same topology and hence the same continuous functions. (This is useful in numerical analysis where you’d like to look at a variety of norms. The norms give different analytical results, but they’re all topologically equivalent.)
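
A small numerical illustration of this equivalence (an example of mine, not from the post): on R^n the max norm and the Euclidean norm bound each other up to constants that depend only on the dimension.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    for _ in range(5):
        x = rng.normal(size=n)
        max_norm = np.max(np.abs(x))
        two_norm = np.linalg.norm(x)
        # Equivalence of norms on R^n: ||x||_max <= ||x||_2 <= sqrt(n) ||x||_max
        assert max_norm <= two_norm <= np.sqrt(n) * max_norm
    print("Norm equivalence bounds hold for all sampled vectors")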

In an infinite dimensional normed space, linear functions are not necessarily continuous. If the dimension of a space is only a trillion, all linear functions are continuous, but when you jump from high dimension to infinite dimension, you can have discontinuous linear functions. But if you look at this more carefully, the change isn’t really so sudden.

If a linear function is discontinuous, its finite dimensional approximations are continuous, but the degree of continuity degrades as dimension increases. For example, suppose a linear function stretches the nth basis vector by a factor of n. The bigger n gets, the more the function stretches in the nth dimension. As long as n is bounded, this is continuous, but in a sense it is less continuous as n increases. The fact that the infinite dimensional version is discontinuous tells you that the finite dimensional versions, while technically continuous, scale poorly with dimension. (See practical continuity for more discussion along these lines.)
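
One way to make the stretching example concrete (my sketch, under the assumption that we truncate to the first n coordinates): the truncated operator is the diagonal matrix diag(1, 2, …, n), whose operator norm is n. Each truncation is continuous, but the norms grow without bound, and the full infinite-dimensional operator is unbounded.

    import numpy as np

    def truncated_stretch(n):
        # Diagonal operator that stretches the k-th basis vector by a factor of k
        return np.diag(np.arange(1, n + 1, dtype=float))

    for n in [10, 100, 1000]:
        T = truncated_stretch(n)
        op_norm = np.linalg.norm(T, ord=2)  # largest singular value
        print(n, op_norm)                   # the operator norm equals n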

Completeness

A Banach space is a complete normed linear space. Finite dimensional normed spaces are always complete (i.e. every Cauchy sequence in the space converges to a point in the space) but this might not happen in infinite dimensions.
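
A standard example (not from the post) of how completeness can fail: take the polynomials on [0, 1] with the sup norm. The partial sums of the exponential series are polynomials and form a Cauchy sequence, but their uniform limit, e^x, is not a polynomial, so the sequence has no limit inside the space. A quick numerical sketch:

    import numpy as np
    from math import factorial

    x = np.linspace(0, 1, 1001)

    def partial_sum(n):
        # n-th partial sum of the exponential series, a polynomial of degree n
        return sum(x**k / factorial(k) for k in range(n + 1))

    # Successive partial sums get uniformly close to each other (a Cauchy sequence) ...
    for n in [5, 10, 15]:
        print(n, np.max(np.abs(partial_sum(n + 1) - partial_sum(n))))

    # ... and they converge uniformly to exp, which is not a polynomial,
    # so the limit lies outside the space of polynomials.
    print(np.max(np.abs(partial_sum(15) - np.exp(x))))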

Duals and double duals

In basic linear algebra, the dual of a vector space V is the space of linear functionals on V, i.e. the set of linear maps from V to the reals. This space is denoted V*. If V has dimension n, then V* has dimension n, and all n-dimensional spaces are isomorphic, so the distinction between a space and its dual seems pedantic. But in general a Banach space and its dual are not isomorphic, and so it’s easier to tell them apart.

The second dual of a vector space, V**, is the dual of the dual space. In finite dimensional spaces, V** is naturally isomorphic to V. In Banach spaces, V is isomorphic to a subspace of V**. And even when V is isomorphic to V**, it might not be naturally isomorphic to V**. (Here “natural” means natural in the category theory sense of natural transformations.)


We got the definition wrong

When I was in grad school, I had a course in Banach spaces with Haskell Rosenthal. One day he said “We got the definition wrong.” It took a while to understand what he meant.

There’s nothing logically inconsistent about the definition of Banach spaces. What I believe he meant is that the definition is too broad to permit nice classification theorems.

I had intended to specialize in functional analysis in grad school, but my impression after taking that course was that researchers in the field, at least locally, were only interested in questions of the form “Does every Banach space have the property …” In my mind, this translated to “Can you construct a space so pathological that it lacks a property enjoyed by every space that anyone cares about?” This was not for me.

I ended up studying differential equations. I found it more interesting to use Banach spaces to prove theorems about PDEs than to study them for their own sake. From my perspective there was nothing wrong with their definition.
