The simplest definition of a tensor is that it is a multilinear functional, i.e. a function that takes several vectors, returns a number, and is linear in each argument. Tensors over real vector spaces return real numbers, tensors over complex vector spaces return complex numbers, and you could work over other fields if you’d like.
A dot product is an example of a tensor. It takes two vectors and returns a number. And it’s linear in each argument. Suppose you have vectors u, v, and w, and a real number a. Then the dot product (u + v, w) equals (u, w) + (v, w) and (au, w) = a(u, w). This shows that dot product is linear in its first argument, and you can show similarly that it is linear in the second argument.
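The bilinearity just described can be checked numerically. Here is a minimal sketch using NumPy with random vectors (the vectors and the scalar are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 4))  # three random 4-vectors
a = 2.5

# Linearity in the first argument: (u + v, w) = (u, w) + (v, w)
assert np.isclose(np.dot(u + v, w), np.dot(u, w) + np.dot(v, w))

# Homogeneity in the first argument: (au, w) = a(u, w)
assert np.isclose(np.dot(a * u, w), a * np.dot(u, w))
```

The same checks with the arguments swapped verify linearity in the second argument.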
Determinants are also tensors. You can think of the determinant of an n by n matrix as a function of its n rows (or columns). This function is linear in each argument, so it is a tensor.
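To illustrate, here is a sketch that treats the determinant of a 3 by 3 matrix as a function of its first row and checks linearity in that row (the matrix and rows are random; the helper function is for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
r, s = rng.standard_normal((2, 3))  # two candidate first rows
a = 1.7

def det_with_first_row(row):
    # Determinant of M with its first row replaced by `row`
    A = M.copy()
    A[0] = row
    return np.linalg.det(A)

# Linear in the first row; the same holds for each other row
assert np.isclose(det_with_first_row(r + s),
                  det_with_first_row(r) + det_with_first_row(s))
assert np.isclose(det_with_first_row(a * r), a * det_with_first_row(r))
```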
The introduction to this series mentioned the interpretation of tensors as a box of numbers: a matrix, a cube, etc. This is consistent with our definition because you can write a multilinear functional as a sum. For every vector that a tensor takes in, there is an index to sum over. A tensor taking n vectors as arguments can be written as n nested summations. You could think of the coefficients of this sum as being spread out in space, each index corresponding to a dimension.
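For a tensor taking two vectors, the nested sum is T(u, v) = sum over i and j of T_ij u_i v_j, and the coefficients T_ij form a matrix. A small sketch (with arbitrary random components):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4))   # the "box of numbers": one index per argument
u, v = rng.standard_normal((2, 4))

# Evaluate the tensor as two nested summations, one per index
value = sum(T[i, j] * u[i] * v[j] for i in range(4) for j in range(4))

# The same evaluation in matrix notation: u' T v
assert np.isclose(value, u @ T @ v)
```

A tensor taking three vectors would have components T_ijk filling a cube, evaluated by three nested sums, and so on.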
Tensor products are simple in this context as well. If you have a tensor S that takes m vectors at a time, and another tensor T that takes n vectors at a time, you can create a tensor that takes m + n vectors by sending the first m of them to S, the rest to T, and multiply the results. That’s the tensor product of S and T.
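This construction is easy to write down directly. Here is a sketch in which tensors are represented as plain functions of vectors; the function name `tensor_product` and the choice of example are mine, not standard library code:

```python
import numpy as np

def tensor_product(S, m, T, n):
    """Given S taking m vectors and T taking n vectors, return the
    functional taking m + n vectors: feed the first m to S, the
    rest to T, and multiply the results."""
    def ST(*vectors):
        assert len(vectors) == m + n
        return S(*vectors[:m]) * T(*vectors[m:])
    return ST

# Example: the tensor product of two dot products takes four vectors
dot = lambda x, y: float(np.dot(x, y))
ST = tensor_product(dot, 2, dot, 2)

rng = np.random.default_rng(3)
u, v, w, x = rng.standard_normal((4, 3))
assert np.isclose(ST(u, v, w, x), dot(u, v) * dot(w, x))
```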
The discussion of tensors and tensor products above still leaves a lot of questions unanswered. We haven’t considered the most general definition of tensor or tensor product. And we haven’t said anything about how tensors arise in applications, or what they have to do with geometry or changes of coordinates. I plan to address these issues in future posts. I also plan to write about other things in between posts on tensors.
Next post in series: Tensor products
How do you represent the dot product as a “box of numbers”?
@michael: the dot product corresponds to the identity matrix: a . b = (I a)’ b where ‘ = transpose. Or, p(a, b) = sum p_ij a_i b_j with p_ij = delta_ij = components of the identity matrix I = delta = diag( 1, 1, …, 1 ).
@John: I don’t think that this definition is a good definition, a tensor does not typically return a number, but rather another tensor (which may be a scalar or a vector or a higher rank tensor). And then it’s slightly (but not much) more complicated to define a tensor product.