I’ve written before about how the word “tensor” is applied to several different things, and that the connection between them isn’t immediately obvious. I thought about writing more about that some day, but I recently became aware of a paper that does this more thoroughly than I ever could.

The paper looks at three notions of a tensor

- a multi-indexed object that satisfies certain transformation rules
- a multilinear map
- an element of a tensor product of vector spaces

and explores (for 210 pages!) how they are related. The length comes primarily from the number of examples. The author is pulling on a thread that runs throughout a lot of mathematics.
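As a concrete illustration (my own sketch, not from the paper), here is how the three notions line up for an order-2 tensor, taking a bilinear form as the example. The array of components is notion 1, the multilinear map it computes is notion 2, and a rank-one element of a tensor product is notion 3. The transformation rule T′ = AᵀTA assumed here is the one for a covariant 2-tensor under the change of coordinates x_old = A x_new.

```python
import numpy as np

rng = np.random.default_rng(0)

# Notion 1: a tensor as a multi-indexed array of components.
T = rng.standard_normal((3, 3))   # components T[i, j] of a bilinear form

# Notion 2: the same tensor as a multilinear map V x V -> R.
def bilinear(u, v):
    # sum_ij u[i] * T[i, j] * v[j]; linear in each argument separately
    return np.einsum("i,ij,j->", u, T, v)

u = rng.standard_normal(3)
v = rng.standard_normal(3)

# Multilinearity check: scaling one argument scales the output.
assert np.isclose(bilinear(2 * u, v), 2 * bilinear(u, v))

# Notion 1's "transformation rule": change basis by an invertible A.
# With x_old = A @ x_new, the components of a covariant 2-tensor
# transform as T' = A.T @ T @ A ...
A = rng.standard_normal((3, 3))
T_new = A.T @ T @ A

# ... and the value of the multilinear map is basis-independent.
u_new = np.linalg.solve(A, u)
v_new = np.linalg.solve(A, v)
assert np.isclose(np.einsum("i,ij,j->", u_new, T_new, v_new), bilinear(u, v))

# Notion 3: an element of a tensor product, e.g. the rank-one tensor
# u (x) v, whose component array is the outer product.
outer = np.outer(u, v)            # (u (x) v)[i, j] = u[i] * v[j]
assert np.isclose(np.einsum("ij,ij->", T, outer), bilinear(u, v))
```

The point of the asserts is that the number the map produces doesn't change when the basis does; only the indexed components do, and they change in exactly the prescribed way.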

Here’s the paper:

Lek-Heng Lim. Tensors in computations. Acta Numerica (2021), pp. 1–210. doi:10.1017/S0962492921000076. Available here.

I’m tasked with understanding tensors well enough to explain them to undergrads who are absolute linear algebra novices, and I thought I’d give this a look. On page 3, Lim writes, “The article is written with accessibility and simplicity in mind.” I was out of breath by the time I got to this reassurance (?). Looks like I have my summer reading cut out for me. I am reminded of something John von Neumann said: “…in mathematics you don’t understand things. You just get used to them.”

Thanks for the pointer; I would never have found this otherwise.

Mike’s comment is utterly on point.

This paper is a great find — and a great share.

I wonder if it’s available non-paywalled, and whether he covers practical examples, especially decomposability (or the lack thereof), which pops up in practice (e.g. in ML).

(followup)

Oh my gosh you linked the whole paper — bless your hands!!