
Publisher: Chapman and Hall/CRC

Publication Date: 2009

Number of Pages: 654

Format: Hardcover

Price: 99.95

ISBN: 9781439800409

Category: Textbook

[Reviewed by Richard J. Wilders, on 12/28/2009]

The introduction describes this text as suitable for a one- or two-semester course at the sophomore-junior level, designed to introduce students to the notion of proof. The author is thus seeking to provide a suitable text for the sort of course most of us teach under the title of linear algebra. As a result, this text is competing with several other very good texts.

In terms of physical appearance this text is not competitive. The illustrations are not up to the quality we now expect in a mainline textbook. In addition, the typeface used makes the portions involving vectors and matrices somewhat hard to read. The theorems and corollaries are shaded in gray; otherwise, there is no color. There are wide margins on every page, resulting in a much thicker book than necessary. Often, such margins are used to insert historical or other interesting side notes — in the case of this book, they merely take up space. These are small issues, but I for one have found students respond very well to the historical vignettes that appear in most recent texts. The typeface issue is, to my mind, a serious one.

The key decision in writing such a text is when to begin the theoretical portion of the course — abstract vector spaces. The most common ordering begins with systems of linear equations and then proceeds to matrices, vectors in **R**^{2} and **R**^{3}, followed (at last) by a consideration of the abstract notion of a vector space. While this allows students a leisurely few weeks at the beginning, it also leads them to believe that there won’t be much of anything new in the course. In our term system (10 weeks) we often don’t get to the difficult parts of the course until it’s too late to drop — a classic bait and switch strategy. Starting with abstract vector spaces is also difficult, however. Students at this level are still tied to concrete examples — leading with the vector axioms won’t work for most of them. What then are we to do?

The present text begins with vectors in **R**^{n}, though most of the examples are from **R**^{2} and **R**^{3}. This seems a nice compromise — students are introduced to a (relatively) abstract notion early on but in the context of something (vectors) they met in Calculus. Vectors and scalar multiplication are nicely motivated, though not enough attention is paid to their use in physics. Indeed, the only mention is a curious statement on page 7: “…a vector is a quantity that has both magnitude (size) and direction, for example, velocity, acceleration and *other forces*.” (Emphasis mine). The first set of exercises contains some nice problems, but concludes with the definition of a convex set and of the convex hull of a set followed by a set of problems I would guess few sophomores would make much progress on.

We then proceed to the inner product on **R**^{n} and a proof of the Cauchy-Schwarz inequality. The exercises for this section are again a mixture of drill problems and very difficult problems. The vector cross product is defined in an exercise — no motivation is given for this except as a footnote referring students to a textbook on Advanced Engineering Mathematics.
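The Cauchy-Schwarz inequality the author proves here, |⟨u, v⟩| ≤ ‖u‖ ‖v‖, is easy to check numerically. The sketch below (my own illustration, not an example from the book) verifies it for a batch of random vectors in **R**^{5}:

```python
import numpy as np

# A quick numerical check of the Cauchy-Schwarz inequality
# |<u, v>| <= ||u|| ||v|| for random vectors in R^5
# (the vectors are arbitrary, not taken from the text).
rng = np.random.default_rng(0)
for _ in range(1000):
    u, v = rng.normal(size=5), rng.normal(size=5)
    assert abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12
print("Cauchy-Schwarz held in all 1000 trials")
```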

The next two chapters cover matrices and eigenvalues respectively. There is more material on eigenvalues than is typical for a course like this, allowing instructors to tailor their courses to the needs of their students. As with the other chapters, there are not enough examples of basic ideas in the body of the text. There is only one worked out example of finding the eigenvalues of a matrix. A second example, of a matrix whose characteristic polynomial has a repeated root, merely lists two independent eigenvectors corresponding to that root. In my experience, students have trouble dealing with the underdetermined systems that result in these cases. The concept of multiplicity occurs several sections later.
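The repeated-root case the book glosses over is exactly where a worked computation helps. Here is a minimal sketch (my own example, not one from the book) of a symmetric matrix whose characteristic polynomial (λ − 2)²(λ − 5) has the repeated root λ = 2, showing how the underdetermined system yields two independent eigenvectors:

```python
import numpy as np

# A symmetric matrix whose characteristic polynomial (l - 2)^2 (l - 5)
# has the repeated root l = 2, yet still has two independent
# eigenvectors for that root.
A = np.array([[3.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 3.0]])

print(np.linalg.eigvalsh(A))  # approximately [2. 2. 5.]

# The eigenspace for l = 2 is the null space of A - 2I. The system
# (A - 2I)x = 0 collapses to the single equation x1 + x2 + x3 = 0,
# an underdetermined system with two free variables, hence two
# independent eigenvectors.
rank = np.linalg.matrix_rank(A - 2 * np.eye(3))
print(3 - rank)  # nullity (geometric multiplicity) = 2
```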

Chapter 5 introduces the concept of an abstract vector space and provides a nice set of examples. We then meet the concept of subspace. The second example is a nice one, but the language used is truly strange (Example 5.2.2, page 355). We are to consider P_{3}[x], described as “the set of all polynomials of degree less than or equal to 3.” Here is the next sentence exactly as it appears: “Now consider P_{2}[x] = {a_{k}x^{k} + a_{k–1}x^{k–1} + …a_{1}x + a_{0} | a_{i} ∈ **R** for i=0,1,2,…k, k ≤ 2}.” One can only wonder what’s missing between 2 and k! Why not just describe P_{2}[x] as all polynomials of degree 2 or less?
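What the author presumably intends (my reconstruction, not the book's wording) is simply:

```latex
P_2[x] \;=\; \left\{\, a_2 x^2 + a_1 x + a_0 \;\middle|\; a_0, a_1, a_2 \in \mathbb{R} \,\right\}
```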

The third example defines the notions of linear transformation, range, and kernel and then proves that the range and kernel are subspaces of their respective vector spaces. No examples of linear transformations are given, nor are we given any hint as to what they might be good for. Linear transformations are the subject of chapter 6 — I think this example would be better placed there. These facts are mentioned in chapter 6: “In discussing the null space and the range it is important to recognize that these are subspaces, not just subsets.” (page 401) Students are not reminded that this was proved in Chapter 5.

As mentioned above, Chapter 6 discusses linear transformations. Included is a statement and proof of the fact that all real vector spaces of dimension n are isomorphic. I think this theorem would have more power for students if it were preceded by at least one example, such as P_{3}[x] being isomorphic to M_{2,2}, with the isomorphism explicitly constructed. There are lots of good examples like this in the exercises, but the theorem itself is not well motivated in the text proper.
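One explicit isomorphism of this kind (my own construction, chosen for illustration) simply arranges the four coefficients of a cubic into a 2×2 matrix:

```latex
T : P_3[x] \to M_{2,2}, \qquad
T\!\left(a + bx + cx^2 + dx^3\right) \;=\;
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
```

This map is linear, and it is a bijection because the coefficients determine the polynomial and the entries determine the matrix, so both spaces have dimension 4.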

This is one of several places where the author violates the approach he describes in the Introduction:

Motivating concrete example — analysis — general principle — additional concrete examples (page xiv)

The text concludes with chapters on Inner Product Spaces and Hermitian Matrices and Quadratic Forms. The chapter on inner product spaces opens with a brief discussion of complex vector spaces and then defines an abstract inner product — no motivating examples are given. Indeed, students are not reminded that the dot product they encountered earlier is indeed an inner product until the second set of examples which follow the definition. These examples include the integral from a to b of f(x)g(x) as an inner product on *C*[a,b], but no proof is given and it is only two pages later that an example of this inner product is worked out. Again, despite the author’s stated strategy, the sequence of events seems to be:

General principle — general examples — concrete examples
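A single worked computation would have anchored the abstract definition. Here is a minimal sketch (my own example, with my own choice of functions and interval, not taken from the book) of the inner product ⟨f, g⟩ = ∫_a^b f(x)g(x) dx on *C*[a,b], evaluated numerically for f(x) = x and g(x) = x² on [0, 1], where the exact value is ∫_0^1 x³ dx = 1/4:

```python
# The inner product <f, g> = integral from a to b of f(x) g(x) dx
# on C[a, b], approximated with the midpoint rule.
def inner(f, g, a, b, n=100_000):
    h = (b - a) / n
    mids = (a + (i + 0.5) * h for i in range(n))
    return sum(f(x) * g(x) for x in mids) * h

# f(x) = x, g(x) = x^2 on [0, 1]; the exact answer is 1/4.
val = inner(lambda x: x, lambda x: x * x, 0.0, 1.0)
print(round(val, 6))  # 0.25
```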

In summary, this text is not up to the standards required for a market as large as that for a linear algebra text. There are not enough computational, concrete examples to motivate the big ideas which tie this material together. There are lots of good homework problems, so this might well be a good source for exams or extra-credit problems or the like. As a text, I think there are better choices out there.

Richard Wilders is Marie and Bernice Gantzert Professor in the Liberal Arts and Sciences and Professor of Mathematics at North Central College. His primary areas of interest are the history and philosophy of mathematics and of science. He has been a member of the Illinois Section of the Mathematical Association of America for 30 years and is a recipient of its Distinguished Service Award.

**Vectors**

Vectors in R^{n}

The Inner Product and Norm

Spanning Sets

Linear Independence

Bases

Subspaces

Summary

**Systems of Equations**

The Geometry of Systems of Equations in R^{2} and R^{3}

Matrices and Echelon Form

Gaussian Elimination

Computational Considerations—Pivoting

Gauss–Jordan Elimination and Reduced Row Echelon Form

Ill-Conditioned Systems of Linear Equations

Rank and Nullity of a Matrix

Systems of *m* Linear Equations in *n* Unknowns

**Matrix Algebra**

Addition and Subtraction of Matrices

Matrix–Vector Multiplication

The Product of Two Matrices

Partitioned Matrices

Inverses of Matrices

Elementary Matrices

The LU Factorization

**Eigenvalues, Eigenvectors, and Diagonalization**

Determinants

Determinants and Geometry

The Manual Calculation of Determinants

Eigenvalues and Eigenvectors

Similar Matrices and Diagonalization

Algebraic and Geometric Multiplicities of Eigenvalues

The Diagonalization of Real Symmetric Matrices

The Cayley–Hamilton Theorem (a First Look)/the Minimal Polynomial

**Vector Spaces**

Vector Spaces

Subspaces

Linear Independence and the Span

Bases and Dimension

**Linear Transformations**

Linear Transformations

The Range and Null Space of a Linear Transformation

The Algebra of Linear Transformations

Matrix Representation of a Linear Transformation

Invertible Linear Transformations

Isomorphisms

Similarity

Similarity Invariants of Operators

**Inner Product Spaces**

Complex Vector Spaces

Inner Products

Orthogonality and Orthonormal Bases

The Gram–Schmidt Process

Unitary Matrices and Orthogonal Matrices

Schur Factorization and the Cayley–Hamilton Theorem

The QR Factorization and Applications

Orthogonal Complements

Projections

**Hermitian Matrices and Quadratic Forms**

Linear Functionals and the Adjoint of an Operator

Hermitian Matrices

Normal Matrices

Quadratic Forms

Singular Value Decomposition

The Polar Decomposition

**Appendix A: Basics of Set Theory**

**Appendix B: Summation and Product Notation**

**Appendix C: Mathematical Induction**

**Appendix D: Complex Numbers**


**Answers/Hints to Odd-Numbered Problems**


**Index**
