Tensors: Geometry and Applications

J. M. Landsberg
Publisher: American Mathematical Society
Publication Date: 2012
Number of Pages: 439
Format: Hardcover
Series: Graduate Studies in Mathematics 128
Price: 74.00
ISBN: 9780821869079
Category: Textbook
[Reviewed by Fernando Q. Gouvêa, on 01/28/2012]

Suppose we take a column vector v (i.e., an m×1 matrix) and multiply it by a row vector α (thought of as a 1×n matrix); the product will be an m×n matrix of rank one, each of whose columns is a multiple of v by the corresponding entry in α. If we take α to be one of the elementary row vectors εⱼ (whose entries are all 0 except for a 1 in the j-th position), we get a matrix whose j-th column is v while all the other entries are 0. It’s clear that any m×n matrix can be obtained as a sum of n such products.
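
Here is a minimal NumPy sketch of my own (not from the book) making this concrete: an outer product has rank one, and summing n such products with α = εⱼ recovers any matrix M.

```python
import numpy as np

# A column vector v (m x 1) times a row vector alpha (1 x n) is an
# m x n matrix whose every column is a multiple of v, so it has rank one.
m, n = 3, 4
rng = np.random.default_rng(0)
v = rng.standard_normal((m, 1))
alpha = rng.standard_normal((1, n))
print(np.linalg.matrix_rank(v @ alpha))  # 1

# Any m x n matrix M is a sum of n such products: for each j, take
# v = the j-th column of M and alpha = the elementary row vector eps_j.
M = rng.standard_normal((m, n))
terms = [M[:, [j]] @ np.eye(n)[[j], :] for j in range(n)]
print(np.allclose(sum(terms), M))  # True
```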

The fancy version of that paragraph goes like this. We take V = Rⁿ and W = Rᵐ. Represent their elements as columns. Then the elements of the dual space V* are represented by rows, and the product of v by α as above is the element α ⊗ v of the tensor product V* ⊗ W. The argument above shows that V* ⊗ W is isomorphic to Hom(V,W), and shows further that every element of Hom(V,W) can be represented as a sum of at most n elementary tensors α ⊗ v. Each elementary tensor α ⊗ v is a matrix of rank 1, and it’s easy to prove, in fact, that the minimal number of terms in a representation of a matrix M as a sum of elementary tensors is exactly the rank of M.
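
That last claim is easy to see numerically: the singular value decomposition writes M using exactly rank(M) outer-product terms, exhibiting a minimal decomposition. Again a sketch of my own:

```python
import numpy as np

# The SVD writes M as a sum of exactly rank(M) outer products
# (elementary tensors), so the minimal number of terms is the rank.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 6))  # rank 2
U, s, Vt = np.linalg.svd(M)
r = int(np.sum(s > 1e-10))
reconstruction = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
print(r, np.allclose(reconstruction, M))  # 2 True
```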

Now, at least when n = m, we can determine whether a matrix M has rank n by computing its determinant. In other words, there is a polynomial in n² variables whose zero locus is exactly the subset of V* ⊗ W consisting of matrices of rank less than n. It follows that “most” matrices have rank n and that it is “easy” to decide when a matrix has the expected rank.
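
As a quick illustration (mine, not the book’s), the determinant detects exactly whether a square matrix has full rank:

```python
import numpy as np

# The determinant is a single polynomial in the n^2 entries whose
# vanishing detects a rank drop: det(M) != 0 iff rank(M) = n.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))   # a "generic" matrix: full rank
B = A.copy()
B[:, 3] = B[:, 0] + B[:, 1]       # force a linear dependence
for M in (A, B):
    print(round(np.linalg.det(M), 6), np.linalg.matrix_rank(M))
```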

V* ⊗ W is, of course, only the simplest example of a tensor product. As it turns out, it is far from typical. For example, the rank of an element of V* ⊗ W is bounded by the dimensions of V and W, and the typical (indeed, maximal) rank equals the smaller of the two. The first statement is false for general tensors, and it turns out to be quite hard to determine what the typical rank is.

Tensors: Geometry and Applications is about what happens in the general case: given an element φ of a tensor product A ⊗ B ⊗ C ⊗…⊗ Z, it can be represented as a sum of rank one tensors a ⊗ b ⊗ c ⊗…⊗ z. Define the rank of φ to be the minimal number of terms in a representation. We want to know something about that rank: what is it, typically? Is there a simple way to test whether φ has a given rank?
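
Assembling a tensor from rank-one terms is easy to do numerically; it is going the other way, finding a minimal decomposition, that is hard. A small sketch of my own of the forward direction, using NumPy’s einsum:

```python
import numpy as np

# Build an element of A ⊗ B ⊗ C as a sum of r rank-one tensors
# a_i ⊗ b_i ⊗ c_i; finding the minimal such r for a given tensor
# is the (hard) rank problem.
def sum_of_rank_ones(factors):
    return sum(np.einsum('i,j,k->ijk', a, b, c) for a, b, c in factors)

rng = np.random.default_rng(3)
factors = [tuple(rng.standard_normal(2) for _ in range(3)) for _ in range(3)]
T = sum_of_rank_ones(factors)   # a 2 x 2 x 2 tensor of rank at most 3
print(T.shape)                  # (2, 2, 2)
```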

To see why that matters, consider multiplication of (say) 2×2 matrices. We can think of each matrix as an element of V = R⁴, so multiplication is a map from V × V to V. Since it is bilinear, it defines a homomorphism from V ⊗ V to V, and therefore, as above, an element of V* ⊗ V* ⊗ V. (Landsberg calls it M₂,₂,₂.) Suppose we have a representation of M₂,₂,₂ as a sum of r tensors aᵢ ⊗ bᵢ ⊗ cᵢ. Then we can compute the product of two matrices X and Y using this representation: evaluate the linear functionals to get the numbers aᵢ(X) and bᵢ(Y), perform the r scalar multiplications aᵢ(X)bᵢ(Y), and add up the resulting multiples of the cᵢ. So the rank of M₂,₂,₂ turns out to measure just how much work it is to multiply 2×2 matrices.
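
One can write down M₂,₂,₂ concretely as a 4×4×4 array and check that contracting it against two flattened matrices reproduces the product. A sketch of my own:

```python
import numpy as np

# The 2x2 matrix multiplication tensor M_{2,2,2} as a 4 x 4 x 4 array:
# entry (i, j, k) is the coefficient of X[i] * Y[j] in the k-th entry
# of the product X @ Y, with matrices flattened row-major.
T = np.zeros((4, 4, 4))
for a in range(2):
    for b in range(2):
        for c in range(2):
            T[2*a + b, 2*b + c, 2*a + c] = 1.0

rng = np.random.default_rng(4)
X, Y = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
via_tensor = np.einsum('i,j,ijk->k', X.ravel(), Y.ravel(), T).reshape(2, 2)
print(np.allclose(via_tensor, X @ Y))  # True
```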

The definition of the product of matrices requires us to multiply each row of one matrix by each column of the other; for 2×2 matrices we end up doing 8 multiplications and 4 additions. But the rank of M₂,₂,₂ is actually 7, so using a minimal representation we can do the same computation with only 7 multiplications. That’s a small difference, but the same question for large n is very interesting: if we can show that the rank is smaller than expected, we might be able to vastly simplify the algorithm for multiplying matrices. In fact, the definition of matrix multiplication requires O(n³) multiplications, but the best algorithms do much better, the exponent in the current record being about 2.4.
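
The rank-7 representation referred to here is Strassen’s famous algorithm; for 2×2 matrices it is a direct transcription of the standard formulas:

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 scalar multiplications (Strassen)."""
    m1 = (A[0, 0] + A[1, 1]) * (B[0, 0] + B[1, 1])
    m2 = (A[1, 0] + A[1, 1]) * B[0, 0]
    m3 = A[0, 0] * (B[0, 1] - B[1, 1])
    m4 = A[1, 1] * (B[1, 0] - B[0, 0])
    m5 = (A[0, 0] + A[0, 1]) * B[1, 1]
    m6 = (A[1, 0] - A[0, 0]) * (B[0, 0] + B[0, 1])
    m7 = (A[0, 1] - A[1, 1]) * (B[1, 0] + B[1, 1])
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

rng = np.random.default_rng(5)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
print(np.allclose(strassen_2x2(A, B), A @ B))  # True
```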

There are lots of other examples of problems related to decomposing tensors, which Landsberg discusses in the first chapter. They come from algebraic statistics, complexity theory, spectroscopy, signal analysis, and so on. So the problem of determining the rank of tensors and of effectively decomposing them is definitely important in applied mathematics.

How do we attack it? It turns out that, as in the case of matrices, there are (systems of) polynomial equations that can be used to bound the rank. This situates the problem within algebraic geometry: the tensors of rank r are (roughly) the points on some algebraic variety, which we can attempt to study.

Unfortunately, while we can prove such polynomial equations exist, we can’t always find them explicitly. There is also the “roughly” in the previous paragraph, which can already be seen in the case of matrices: determinant = 0 defines a subvariety of matrices of rank at most n – 1, but that subvariety will include matrices of smaller rank as well. (This leads to the definition of “border rank,” which is the right concept from the point of view of algebraic geometry, but of course the wrong one for practical applications!)
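
The standard illustration of the border-rank phenomenon (the usual textbook example, not a quotation from the book) is a tensor of rank 3 that is a limit of tensors of rank at most 2, so its border rank is 2:

```python
import numpy as np

# W = a⊗a⊗b + a⊗b⊗a + b⊗a⊗a has rank 3, but it is the limit of
# (1/eps) * ((a + eps*b)⊗(a + eps*b)⊗(a + eps*b) - a⊗a⊗a),
# a difference of just two rank-one tensors.
def cube(v):
    return np.einsum('i,j,k->ijk', v, v, v)

a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
W = (np.einsum('i,j,k->ijk', a, a, b)
     + np.einsum('i,j,k->ijk', a, b, a)
     + np.einsum('i,j,k->ijk', b, a, a))
for eps in (1e-1, 1e-3, 1e-5):
    approx = (cube(a + eps * b) - cube(a)) / eps
    print(eps, np.max(np.abs(approx - W)))  # error shrinks like eps
```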

Things get difficult very quickly. We are led to study secant varieties of Segre varieties in projective space. Consideration of “symmetric tensors” leads to secant varieties of Veronese varieties. There’s lots of cool algebraic geometry to play with.

There’s also some representation theory, since the group GL(V) acts on the vector space V and therefore also on tensors built out of V. In addition, we are often interested in tensor products of the form (or isomorphic to) V ⊗ V ⊗…⊗ V, and then we also have a permutation action of the symmetric group.

Most readers will enjoy the preface and chapter 1, which set out the main problems and the motivation from applied mathematics. The preface includes some insightful comments on the tension between the cultures of mathematics and of the scientists and engineers who actually put the results to use. Some very pure mathematics is here put to work solving some very practical problems, which means that the clash of cultures is acute.

The next two chapters are “elementary” in the sense that they do not depend on algebraic geometry or representation theory. They include both elementary proofs of some results (when available) and statements of results to be proved later in the book. A reader who knows linear and multilinear algebra and wants to know more about these questions could read Part 1 with profit.

Part 2 is where the real work is done, with algebraic geometry and representation theory being the main tools. The text gets significantly denser. There is a lot of mathematics here, enough for a graduate course on this material. Part 3 returns to the applications and puts the theory to use. Part 4 is a kind of supplement that gives proofs that require more advanced techniques and discusses other advanced topics.

The exposition is terse, very much in the style of a graduate textbook. The reader must work through the book and become conversant with the subject. For example, the isomorphism between V* ⊗ W and Hom(V,W) is mentioned en passant on page 7 and then used throughout as a matter of course, with very little comment. Thus, this is probably not a book in which the non-specialist can browse.

The book is very heavy on notation (probably inevitably so); more than once I wished for an index of notations. There are many exercises, some of them with hints or solutions in an appendix.

The theory discussed by Landsberg has deep historical roots, as we can tell by his occasional comments that this or that result dates back to 19th century invariant theory. There are also results that come from what Landsberg sometimes refers to as “the tensor literature,” which seems to be mostly practical stuff by applied mathematicians. I would have been interested in a slightly deeper investigation of this. In particular, it would have been good to have some lessons about (and examples of) how to translate between the various languages in which the theory has been expressed.

I am no specialist on this subject, so I found Tensors difficult but fascinating. I would love to audit Landsberg’s graduate course.


Fernando Q. Gouvêa is Carter Professor of Mathematics at Colby College in Waterville, ME.