Duals and double duals of Banach spaces

The canonical examples of natural and unnatural transformations come from linear algebra, namely the relation between a vector space and its first and second duals. We will look briefly at the finite dimensional case, then concentrate on the infinite dimensional case.

Two finite-dimensional vector spaces over the same field are isomorphic if and only if they have the same dimension.

For a finite dimensional space V, its dual space V* is defined to be the vector space of linear functionals on V, that is, the set of linear functions from V to the underlying field. The space V* has the same dimension as V, and so the two spaces are isomorphic. You can do the same thing again, taking the dual of the dual, to get V**. This also has the same dimension, and so V is isomorphic to V** as well as V*. However, V is naturally isomorphic to V** but not to V*. That is, you can construct an isomorphism from V to V*, but not without making arbitrary choices.
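To see where the arbitrariness comes in (a standard observation, not spelled out above): pick a basis e1, …, en of V and let e^1, …, e^n be the dual basis of V*, defined by e^i(ej) = 1 when i = j and 0 otherwise. Sending each ei to e^i and extending linearly gives an isomorphism from V to V*, but a different basis generally gives a different isomorphism. The embedding of V into V** described below requires no such choice.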

Some things in linear algebra are easier to see in infinite dimensions, i.e. in Banach spaces. Distinctions that seem pedantic in finite dimensions clearly matter in infinite dimensions.

The category of Banach spaces considers linear spaces and continuous linear transformations between them. In a finite dimensional Euclidean space, all linear transformations are continuous, but in infinite dimensions a linear transformation is not necessarily continuous.

The dual of a Banach space V is the space of continuous linear functionals on V. Now we can see examples where V* is not only not naturally isomorphic to V, it’s not isomorphic at all.

For any real p > 1, let q be the number such that 1/p + 1/q = 1. The Banach space L^p is defined to be the set of (equivalence classes of) Lebesgue measurable functions f such that the integral of |f|^p is finite. The dual space of L^p is L^q. If p does not equal 2, then these two spaces are different. (If p does equal 2, then so does q; L^2 is a Hilbert space and its dual is indeed the same space.)
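As a finite-dimensional sanity check of this duality (my own illustration, using vectors in place of functions): pairing x with the extremal element of the unit ball of the q-norm recovers exactly the p-norm of x, which is the essence of saying the dual of the p-norm is the q-norm.

    import numpy as np

    p = 3.0
    q = p / (p - 1)                          # so that 1/p + 1/q = 1
    x = np.array([2.0, -1.0, 0.5])           # plays the role of an L^p function

    # The extremal element of the unit ball of the q-norm for this x.
    y = np.sign(x) * np.abs(x) ** (p - 1)
    y /= np.linalg.norm(y, q)

    print(np.dot(x, y))                      # the pairing of x and y ...
    print(np.linalg.norm(x, p))              # ... equals the p-norm of x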

In the finite dimensional case, a vector space V is isomorphic to its second dual V**. In general, V can be embedded into V**, but V** might be a larger space. The embedding of V in V** is natural, both in the intuitive sense and in the formal sense of natural transformations, discussed in the previous post. We can turn an element of V into a linear functional on linear functions on V as follows.

Let v be an element of V and let f be an element of V*. The action of v on f is simply f(v). That is, v acts on linear functions by letting them act on it!
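Here’s a tiny finite-dimensional sketch of that embedding, with V = R^3 and an arbitrarily chosen functional (my own illustration, not from the discussion above):

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])          # an element of V = R^3

    def f(w):
        """A linear functional in V*: w |-> <a, w> for a fixed vector a."""
        return np.dot(np.array([0.5, -1.0, 2.0]), w)

    def embed(v):
        """The natural embedding V -> V**: send v to 'evaluate at v'."""
        return lambda functional: functional(v)

    vv = embed(v)                           # an element of V**
    print(vv(f), f(v))                      # the same number, by construction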

This shows that some elements of V** come from evaluation at elements of V, but there could be more. Returning to the example of Lebesgue spaces above, the dual of L^1 is L^∞, the space of essentially bounded functions. But the dual of L^∞ is larger than L^1. That is, one way to construct a continuous linear functional on bounded functions is to multiply them by an absolutely integrable function and integrate. But there are other ways to construct linear functionals on L^∞.

A Banach space V is reflexive if the natural embedding of V in V** is an isomorphism. For p > 1, the spaces L^p are reflexive.

However, R. C. James proved the surprising result that there are Banach spaces that are isomorphic to their second duals, but not naturally. That is, there are spaces V where V is isomorphic to V**, but not via the natural embedding; the natural embedding of V into V** is not an isomorphism.

Related: Applied functional analysis

Natural transformations

The ladder of abstractions in category theory starts with categories, then functors, then natural transformations. Unfortunately, natural transformations don’t seem very natural when you first see the definition. This is ironic since the original motivation for developing category theory was to formalize the intuitive notion of a transformation being “natural.” Historically, functors were defined in order to define natural transformations, and categories were defined in order to define functors, just the opposite of the order in which they are introduced now.

A category is a collection of objects and arrows between objects. Usually these “arrows” are functions, but in general they don’t have to be.

A functor maps a category to another category. Since a category consists of objects and arrows, a functor maps objects to objects and arrows to arrows.

A natural transformation maps functors to functors. Sounds reasonable, but what does that mean?

You can think of a functor as a way to create a picture of one category inside another. Suppose you have some category and pick out two objects in that category, A and B, and suppose there is an arrow f between A and B. Then a functor F would take A and B and give you objects FA and FB in another category, and an arrow Ff between FA and FB. You could do the same with another functor G. So the objects A and B and the arrow between them in the first category have counterparts under the functors F and G in the new category as in the two diagrams below.

A natural transformation α between F and G is something that connects these two diagrams into one diagram that commutes.

The natural transformation α is a collection of arrows in the new category, one for every object in the original category. So we have an arrow αA for the object A and another arrow αB for the object B. These arrows are called the components of α at A and B respectively.
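In symbols, saying the combined diagram commutes means that Gf ∘ αA = αB ∘ Ff: applying αA and then Gf is the same as applying Ff and then αB. (This is the standard notation; the diagrams leave it implicit.)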

Note that the components of α depend on the objects A and B but not on the arrow f. If f represents any other arrow from A to B in the original category, the same arrows αA and αB fill in the diagram.

Natural transformations are meant to capture the idea that a transformation is “natural” in the sense of not depending on any arbitrary choices. If a transformation does depend on arbitrary choices, the arrows αA and αB would not be reusable but would have to change when f changes.
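Here’s a loose programming analogy (my own, not a claim about the formal definition), using nothing beyond Python’s standard typing module. List and Optional can be viewed as functors, and “take the first element if there is one” behaves like a natural transformation between them: its component at every type is the same code, and the naturality condition holds for any function.

    from typing import Callable, List, Optional, TypeVar

    A = TypeVar("A")
    B = TypeVar("B")

    def fmap_list(f: Callable[[A], B], xs: List[A]) -> List[B]:
        """How the List functor acts on arrows (functions)."""
        return [f(x) for x in xs]

    def fmap_optional(f: Callable[[A], B], x: Optional[A]) -> Optional[B]:
        """How the Optional functor acts on arrows."""
        return None if x is None else f(x)

    def head(xs: List[A]) -> Optional[A]:
        """A transformation List -> Optional; its component at every type is this same code."""
        return xs[0] if xs else None

    square = lambda n: n * n
    xs = [3, 1, 4]
    # Naturality: transforming then mapping equals mapping then transforming.
    assert fmap_optional(square, head(xs)) == head(fmap_list(square, xs))

The point of the analogy is that head never inspects the elements themselves, which is exactly the “no arbitrary choices” intuition.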

The next post will discuss the canonical examples of natural and unnatural transformations.

Related: Applied category theory

Unnatural language processing

[Photo: Japanese-Russian dictionary]

Larry Wall, creator of the Perl programming language, created a custom degree plan in college, an interdisciplinary course of study in natural and artificial languages, i.e. linguistics and programming languages. Many of the features of Perl were designed as an attempt to apply natural language principles to the design of an artificial language.

I’ve been thinking of a different connection between natural and artificial languages, namely using natural language processing (NLP) to reverse engineer source code.

The source code of a computer program is text, but not a text. That is, it consists of plain text files, but it’s not a text in the sense that Paradise Lost or an email is a text. The most efficient way to parse a programming language is as a programming language. Treating it as an English text would lose vital structure and wrongly impose a foreign structure.

But what if you have two computer programs? That’s the problem I’ve been thinking about. I have code in two very different programming languages, and I’d like to know how functions in one code base relate to those in the other. The connections are not ones that a compiler could find. The connections are more psychological than algorithmic. I’d like to reverse engineer, for example, which function in language A a developer had in mind when he wrote a function in language B.

Both code bases are written in programming languages, but the function names are approximately natural language. If a pair of functions have the same name in both languages, and that name is not generic, then there’s a good chance they’re related. And if the names are similar, maybe they’re related.

I’ve done this sort of thing informally forever. I imagine most programmers do something like this from time to time. But only recently have I needed to do this on such a large scale that proceeding informally was not an option. I wrote a script to automate some of the work by looking for fuzzy matches between function names in both languages. This was far from perfect, but it reduced the amount of sleuthing necessary to line up the two sets of source code.
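Here’s a minimal sketch of that kind of fuzzy matching, using only the standard library. The function names below are made up for illustration; this is the core idea, not the script I actually used.

    import difflib

    # Hypothetical function names extracted from the two code bases.
    names_a = ["calc_dose", "update_posterior", "read_patient_data"]
    names_b = ["compute_dose", "posterior_update", "load_patients", "main"]

    for name in names_a:
        # Up to three candidates whose similarity ratio is at least 0.5.
        matches = difflib.get_close_matches(name, names_b, n=3, cutoff=0.5)
        print(f"{name:20s} -> {matches}")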

Around a year ago I had to infer which parts of an old Fortran program corresponded to different functions in a Python program. I also had to infer how some poorly written articles mapped to either set of source code. I did all this informally, but I wonder now whether NLP might have sped up my detective work.

Another situation where natural language processing could be helpful in software engineering is determining code authorship. Again this is something most programmers have probably done informally, saying things like “I bet Bill wrote this part of the code because it looks like his style” or “Looks like Pat left her fingerprints here.” This could be formalized using NLP techniques, and I imagine it has been. Just as Frederick Mosteller and colleagues did a statistical analysis of The Federalist Papers to determine who wrote which paper, I’m sure there have been similar analyses to try to find out who wrote what code, say for legal reasons.

Maybe this already has a name, but I like “unnatural language processing” for the application of natural language processing to unnatural (i.e. programming) languages. I’ve done a lot of ad hoc unnatural language processing, and I’m curious how much of it I could automate in the future.

How areas of math are connected

In my previous post, I discussed how number theory and topology relate to other areas of math. Part of that was to show a couple diagrams from Jean Dieudonné’s book Panorama of Pure Mathematics, as seen by N. Bourbaki. That book has only small star-shaped diagrams considering one area of math at a time. I’ve created a diagram that pastes these local views into one grand diagram. Along the way I’ve done a little editing because the original diagrams were not entirely consistent.

Here’s a condensed view of the graph. You can find the full image here.

The graph is so dense that it’s hard to tell which areas have the most or least connections. Here are some tables to clarify that. First, counting how many areas a particular area contributes to, i.e. the number of outgoing arrows.

|-------------------------------------+---------------|
| Area                                | Contributions |
|-------------------------------------+---------------|
| Homological algebra                 |            12 |
| Lie groups                          |            11 |
| Algebraic and differential topology |            10 |
| Categories and sheaves              |             9 |
| Commutative algebra                 |             9 |
| Commutative harmonic analysis       |             9 |
| Algebraic geometry                  |             8 |
| Differential geometry and manifolds |             8 |
| Integration                         |             8 |
| Partial differential equations      |             8 |
| General algebra                     |             7 |
| Noncommutative harmonic analysis    |             6 |
| Ordinary differential equations     |             6 |
| Spectral theory of operators        |             6 |
| Analytic geometry                   |             5 |
| Automorphic and modular forms       |             5 |
| Classical analysis                  |             5 |
| Mathematical logic                  |             5 |
| Abstract groups                     |             4 |
| Ergodic theory                      |             4 |
| Probability theory                  |             4 |
| Topological vector spaces           |             4 |
| General topology                    |             3 |
| Number theory                       |             3 |
| Von Neumann algebras                |             2 |
| Set theory                          |             1 |
|-------------------------------------+---------------|

Next, counting the sources each area draws on, i.e. counting incoming arrows.

|-------------------------------------+---------|
| Area                                | Sources |
|-------------------------------------+---------|
| Algebraic geometry                  |      13 |
| Number theory                       |      12 |
| Lie groups                          |      11 |
| Noncommutative harmonic analysis    |      11 |
| Algebraic and differential topology |      10 |
| Analytic geometry                   |      10 |
| Automorphic and modular forms       |      10 |
| Ordinary differential equations     |      10 |
| Ergodic theory                      |       9 |
| Partial differential equations      |       9 |
| Abstract groups                     |       8 |
| Differential geometry and manifolds |       8 |
| Commutative algebra                 |       6 |
| Commutative harmonic analysis       |       6 |
| Probability theory                  |       5 |
| Categories and sheaves              |       4 |
| Homological algebra                 |       4 |
| Spectral theory of operators        |       4 |
| Von Neumann algebras                |       4 |
| General algebra                     |       2 |
| Mathematical logic                  |       1 |
| Set theory                          |       1 |
| Classical analysis                  |       0 |
| General topology                    |       0 |
| Integration                         |       0 |
| Topological vector spaces           |       0 |
|-------------------------------------+---------|

Finally, connectedness, counting incoming and outgoing arrows.

|-------------------------------------+-------------|
| Area                                | Connections |
|-------------------------------------+-------------|
| Lie groups                          |          22 |
| Algebraic geometry                  |          21 |
| Algebraic and differential topology |          20 |
| Noncommutative harmonic analysis    |          17 |
| Partial differential equations      |          17 |
| Differential geometry and manifolds |          16 |
| Homological algebra                 |          16 |
| Ordinary differential equations     |          16 |
| Analytic geometry                   |          15 |
| Automorphic and modular forms       |          15 |
| Commutative algebra                 |          15 |
| Commutative harmonic analysis       |          15 |
| Number theory                       |          15 |
| Categories and sheaves              |          13 |
| Ergodic theory                      |          13 |
| Abstract groups                     |          12 |
| General algebra                     |          10 |
| Spectral theory of operators        |          10 |
| Probability theory                  |           9 |
| Integration                         |           8 |
| Mathematical logic                  |           6 |
| Von Neumann algebras                |           6 |
| Classical analysis                  |           5 |
| Topological vector spaces           |           4 |
| General topology                    |           3 |
| Set theory                          |           2 |
|-------------------------------------+-------------|

There are some real quirks here. The most foundational areas get short shrift. Set theory contributes to only one area of math?! Topological vector spaces don’t depend on anything, not even topology?!

I suspect Dieudonné had in mind fairly high-level contributions. Topological vector spaces, for example, obviously depend on topology, but not deeply. You could do research in the area while seldom drawing on more than an introductory topology course. Elementary logic and set theory are used everywhere, but most mathematicians have no need for advanced logic or set theory.
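For what it’s worth, the three tables above are just the out-degrees, in-degrees, and total degrees of the nodes in a directed graph. Here’s a minimal sketch of computing such counts from an edge list; the edges below are a made-up sample, not Dieudonné’s full diagram.

    from collections import Counter

    # An edge (A, B) means "A contributes to B."
    edges = [
        ("Homological algebra", "Algebraic geometry"),
        ("Commutative algebra", "Algebraic geometry"),
        ("Algebraic geometry", "Number theory"),
        ("Lie groups", "Number theory"),
    ]

    out_degree = Counter(a for a, b in edges)    # contributions
    in_degree = Counter(b for a, b in edges)     # sources
    areas = {v for edge in edges for v in edge}

    for area in sorted(areas, key=lambda v: -(out_degree[v] + in_degree[v])):
        total = out_degree[area] + in_degree[area]
        print(f"{area:25s} out={out_degree[area]} in={in_degree[area]} total={total}")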


Mathematical balance of trade

Areas of math all draw on and contribute to each other. But there’s a sort of trade imbalance between areas. Some, like analytic number theory, are net importers. Others, like topology, are net exporters.

Analytic number theory uses the tools of analysis, especially complex analysis, to prove theorems about integers. The first time you see this it’s quite a surprise. You might then expect that since analysis contributes to number theory, number theory must contribute back to analysis. But it doesn’t contribute much in return.

Topology imports ideas from algebra. But it exports more than it imports, to algebra and to other areas. Topology started as a generalization of geometry. Along the way it developed ideas that extend far beyond geometry. For example, computer science, which ostensibly has nothing to do with whether you can continuously deform one shape into another, uses ideas from category theory that were developed initially for topology.

Here’s how Jean Dieudonné saw things. The following are my reconstructions of a couple diagrams from his book Panorama of Pure Mathematics, as seen by N. Bourbaki. An arrow from A to B means that A contributes to B, or B uses A.

For number theory, some of Dieudonné’s arrows go both ways, some only go into number theory. No arrows go only outward from number theory.

With topology, however, there’s a net flux of arrows going outward.

These diagrams are highly subjective. There’s plenty of room for disagreement. Dieudonné wrote his book 35 years ago, so you might argue that they were accurate at the time but need to be updated. In any case, the diagrams are interesting.

Update: See the next post for a larger diagram, sewing together little diagrams like the ones above.

Improving on the Unix shell

Yesterday I ran across Askar Safin’s blog post The Collapse of the UNIX Philosophy. Two quotes from the post stood out. One was from Rob Pike about the Unix ideal of little tools that each do one job:

Those days are dead and gone and the eulogy was delivered by Perl.

The other was a line from James Hague:

… if you romanticize Unix, if you view it as a thing of perfection, then you lose your ability to imagine better alternatives and become blind to potentially dramatic shifts in thinking.

This brings up something I’ve long wondered about: What did the Unix shell get right that has made it so hard to improve on? It has some truly awful quirks, and yet people keep coming back to it. Alternatives that seem more rational don’t work so well in practice. Maybe it’s just inertia, but I don’t think so. There are other technologies from the 1970’s that had inertia behind them but have been replaced. The Unix shell got something so right that it’s worth tolerating its flaws. Maybe some of the flaws aren’t even flaws but features that serve some purpose that isn’t obvious.

(By the way, when I say “the Unix shell” I have in mind similar environments as well, such as the Windows command line.)

On a related note, I’ve wondered why programming languages and shells work so differently. We want different things from a programming language and from a shell or REPL. Attempts to bring a programming language and shell closer together sound great, but they inevitably run into obstacles. At some point, we have different expectations of languages and shells and don’t want the two to be too similar.

Anthony Scopatz and I discussed this in an interview a while back in the context of xonsh, “a Python-powered, cross-platform, Unix-gazing shell language and command prompt.” While writing this post I went back to reread Anthony’s comments and appreciate them more now than I did then.

Maybe the Unix shell is near a local optimum. It’s hard to make much improvement without making big changes. As Anthony said, “you quickly end up where many traditional computer science people are not willing to go.”

Related post: What’s your backplane?

Numerically integrating periodic functions

The trapezoid rule is the most obvious numerical integration technique. It comes directly from the definition of a definite integral, just a Riemann sum.

It’s a very crude technique in general; you can get much more accuracy with the same number of function evaluations by using a more sophisticated method. But for smooth periodic functions, the trapezoid rule works astonishingly well.

This post will look at two similar functions. The trapezoid rule will be very accurate for one but not for the other. The first function is

g(x) = exp( cos(x) ).

The second function, h(x), replaces the cosine with its Taylor approximation 1 – x^2/2. That is,

h(x) = exp(1 – x^2/2).

The graph below shows both functions.

Both functions are perfectly smooth. The function g is naturally periodic with period 2π. The function h could be modified to be a periodic function with the same period since h(-π) = h(π).

But the periodic extension of h is not smooth. It’s continuous, but it has a kink at odd multiples of π. The derivative is not continuous at these points. Here’s a close-up to show the kink.

Now suppose we want to integrate both functions from -π to π. Over that range both functions are smooth. But the behavior of h “off stage” affects the efficiency of the trapezoid rule. Making h periodic by pasting copies together that don’t match up smoothly does not make it act like a smooth periodic function as far as integration is concerned.

Here’s the error in the numerical integration using 2, 3, 4, …, 10 integration points.

The integration error for both functions decreases rapidly as we go from 2 to 5 integration points. In fact, with 5 integration points the integration error for h is slightly less than that for g. But the convergence for h practically stops at that point, while the error for g continues to decrease exponentially. Using only 10 integration points, the error for g has dropped to approximately 7×10^-8, while the error for h is five orders of magnitude larger.
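Here’s a sketch of the experiment (not necessarily the code behind the plot above), using scipy’s adaptive quadrature for the reference values:

    import numpy as np
    from scipy.integrate import quad

    def g(x):
        return np.exp(np.cos(x))

    def h(x):
        return np.exp(1 - x**2 / 2)

    def trapezoid(f, a, b, n):
        """Trapezoid rule with n points, i.e. n - 1 equal subintervals."""
        x = np.linspace(a, b, n)
        y = f(x)
        dx = (b - a) / (n - 1)
        return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

    for f, name in [(g, "g"), (h, "h")]:
        exact, _ = quad(f, -np.pi, np.pi)
        errors = [abs(trapezoid(f, -np.pi, np.pi, n) - exact) for n in range(2, 11)]
        print(name, ["%.1e" % e for e in errors])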

Related: Numerical integration consulting

Are polar coordinates backward?

I’d never given any thought to the order of polar coordinates until yesterday. They’re written (r, θ). That’s just how it is. But a friend pointed out two reasons why this bothers him.

First, r is typically a function of θ, just as y is typically a function of x. But in rectangular coordinates, the independent variable is the first element of a pair while in polar coordinates it is the second element.

Second, if you’re going to walk a mile northwest, how do you proceed? You first face northwest, then you walk for a mile. You don’t walk a mile to the east and then walk 135° counterclockwise along an arc centered where you started. That is, you set your θ first, then increase your r.

The (r, θ) convention isn’t going to change. Maybe the only take-away from this discussion is to appreciate someone’s confusion who sees polar coordinates for the first time and wants to reverse their order.

Related post: Why use j for imaginary unit

(I don’t use j for imaginary unit, except when writing Python code. The i notation is too firmly engraved in my brain. But I understand why j has advantages.)

Function on cover of Abramowitz & Stegun

Someone mailed me this afternoon asking if I knew what function was graphed on the cover of Abramowitz and Stegun’s famous Handbook of Mathematical Functions.

Here’s a close-up of the graph from a photo of my copy of A&S.

This was my reply.

It looks like a complex function of a complex variable. I assume the height is the magnitude and the markings on the graph are the phase. That would make it an elliptic function because it’s periodic in two directions.

It has one pole and one zero in each period. I think elliptic functions are determined, up to a constant, by their periods, zeros, and poles, so it should be possible to identify the function.

In fact, I expect it’s the Weierstrass p function. More properly, the Weierstrass ℘ function, sometimes called Weierstrass’s elliptic function. (Some readers will have a font installed that will properly render ℘ and some not. More on the symbol ℘ here.)


Bessel series for a constant

Fourier series express functions as a sum of sines and cosines of different frequencies. Bessel series are analogous, expressing functions as a sum of Bessel functions of different orders.

Fourier series arise naturally when working in rectangular coordinates. Bessel series arise naturally when working in polar coordinates.

The Fourier series for a constant is trivial. You can think of a constant as a cosine with frequency zero.

The Bessel series for a constant is not as simple, but more interesting. Here we have

1 = J_0(x) + 2J_2(x) + 2J_4(x) + 2J_6(x) + \cdots

Since

J_{-n}(x) = (-1)^n J_n(x)

we can write the series above more symmetrically as

1 = \sum_{n=-\infty}^\infty J_{2n}(x)
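As a quick numerical sanity check (my own, using scipy’s Bessel functions), the partial sums of the series do converge to 1:

    from scipy.special import jv

    x = 1.7                                   # an arbitrary test point
    total = jv(0, x) + 2 * sum(jv(2 * k, x) for k in range(1, 25))
    print(total)                              # 1.0 to within floating point error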


SHA1 is no longer recommended, but hardly a failure

The web is all abuzz about how SHA1 is “broken”, “a failure,” “obsolete”, etc.

It is supposed to be computationally impractical to create two documents that have the same secure hash code, and yet Google has demonstrated that they have done just that for the SHA1 algorithm.

I’d like to make two simple observations to put this in perspective.

This is not a surprise. Cryptography experts have suspected since 2005 that SHA1 was vulnerable and recommended using other algorithms. The security community has been gleeful about Google’s announcement. They feel vindicated for telling people for years not to use SHA1.

This took a lot of work, both in terms of research and computing. Crypto researchers have been trying to break SHA1 for 22 years. And according to Google’s announcement, these are the resources they had to use to break SHA1:

  • Nine quintillion (9,223,372,036,854,775,808) SHA1 computations in total
  • 6,500 years of CPU computation to complete the attack first phase
  • 110 years of GPU computation to complete the second phase

While SHA1 is no longer recommended, it’s hardly a failure. I don’t imagine it would take 22 years of research and millions of CPU hours to break into my car.

I’m not saying people should use SHA1. Just a few weeks ago I advised a client not to use SHA1. Why not use a better algorithm (or at least what is currently widely believed to be a better algorithm) like SHA3? But I am saying it’s easy to exaggerate what it means to say SHA1 has failed.
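For what it’s worth, switching hash functions in code is usually a one-line change. Here’s Python’s standard hashlib, purely as an illustration:

    import hashlib

    data = b"example document"
    print(hashlib.sha1(data).hexdigest())      # legacy; no longer recommended
    print(hashlib.sha3_256(data).hexdigest())  # a currently recommended alternative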


Assignment complete, twenty years later

In one section of his book The Great Good Thing, novelist Andrew Klavan describes how he bluffed his way through high school and college, not reading anything he was assigned. He doesn’t say what he majored in, but apparently he got an English degree without reading a book. He only tells of one occasion where a professor called his bluff.

Even though he saw no value in the books he was assigned, he bought and saved every one of them. Then sometime near the end of college he began to read and enjoy the books he hadn’t touched.

I wanted to read their works now, all of them, and so I began. After I graduated, after Ellen and I moved together to New York, I piled the books I had bought in college in a little forest of stacks around my tattered wing chair. And I read them. Slowly, because I read slowly, but every day, for hours, in great chunks. I pledged to myself I would never again pretend to have read a book I hadn’t or fake my way through a literary conversation or make learned reference on the page to something I didn’t really know. I made reading part of my daily discipline, part of my workday, no matter what. Sometimes, when I had to put in long hours to make a living, it was a real slog. …

It took me twenty years. In twenty years, I cleared those stacks of books away. I read every book I had bought in college, cover to cover. I read many of the other books by the authors of those books and many of the books those authors read and many of the books by the authors of those books too.

There came a day when I was in my early forties … when it occurred to me that I had done what I set out to do. …

Against all odds, I had managed to get an education.


Microresumés

I posted a couple things on Twitter today about micro-resumés. First, here’s how I’d summarize my work in a tweet.

(The formatting is a little off above. It’s leaving out a couple line breaks at the end that were in the original tweet.)

That’s not a bad summary. I’ve worked in applied math, software development, and statistics. Now I consult in those areas.

Next, I did the same for Frank Sinatra.

This one’s kinda obscure. It’s a reference to the title cut from his album That’s Life.

I’ve been a puppet, a pauper, a pirate
A poet, a pawn and a king.
I’ve been up and down and over and out
And I know one thing.
Each time I find myself flat on my face
I pick myself up and get back in the race.