This is a textbook covering analysis on the real line and Euclidean space. It also includes a detailed treatment of Lebesgue integration and chapters on fundamental aspects of ordinary differential equations, Fourier series, and basic problems in partial differential equations.

The book is primarily aimed at advanced undergraduates, though the material in the latter part of the book is suitable for beginning graduate students.

Due to the large amount of material covered and the conventional treatment of the topics, this book would also serve as a good reference on undergraduate analysis. A good comparable in terms of breadth of coverage is Apostol's *Mathematical Analysis*.

There is enough material in the book to fill three or four different courses at typical American universities. In describing the content of the book, I will give some examples (not an exhaustive list) of courses that could be suitably taught based on it.

The first seven chapters cover the typical material for an introductory course (or two) in single-variable analysis. It is perhaps a little less inviting than Abbott's popular *Understanding Analysis*, but it is generally more detailed and thorough, and it is much easier to handle than Rudin's classic *Principles of Mathematical Analysis*. The prerequisites for these chapters are one course each in single-variable differential and integral calculus and a course in proof-writing.

Chapters 8 to 13 cover analysis in \( \mathbb{R}^{n} \). Topics covered include: the structure of \( \mathbb{R}^{n} \); limits and continuity in \( \mathbb{R}^{n} \); abstract metric spaces; differentiation in \( \mathbb{R}^{n} \); Taylor’s theorem; Lagrange multipliers; and the inverse and implicit function theorems. Riemann integration in \( \mathbb{R}^{n} \) is covered thoroughly, including detailed treatments of Fubini’s theorem, the change of variables formula, and Lebesgue’s criterion for Riemann integrability. Surface integrals are treated in one section, but vector calculus, differential forms, and the general Stokes theorem do not appear. These chapters would be suitable for a course in multi-variable analysis (aka “rigorous advanced calculus”). Alternatively, I could imagine Chapter 8 (which covers the metric structure of \( \mathbb{R}^{n} \) and limits and continuity for functions on \( \mathbb{R}^{n} \)) being included with the first seven chapters in a two-course sequence in introductory analysis.

Chapter 14 on ordinary differential equations and Chapter 15 on the Dirichlet problem and Fourier series can be viewed as applications of the material from earlier chapters. Thus it would be quite natural to include one or both of them as enrichment in courses covering the earlier chapters. Alternatively, one could build a course around Chapters 14 and 15 and draw material from earlier chapters as background.

Chapters 16 and 17 cover abstract measure theory and Lebesgue integration, with Lebesgue measure on \( \mathbb{R} \) and \( \mathbb{R}^{n} \) as particularly important special cases. Chapter 18 covers abstract inner product spaces, Fourier series expansions, and the \( L^{2} \) space. The treatment is suitable for advanced undergraduates or first-year graduate students. Chapters 16, 17, and 18 (with some parts of other chapters for background) could form a reasonable first course in Lebesgue integration.

As mentioned before, the book covers a wide range of topics and does so in a standard way. However, there were a few pleasant surprises:

- A vivid plausibility argument for the Schroeder-Bernstein Theorem (a detailed proof appears in an appendix).
- The Contraction Mapping Theorem makes an earlier-than-usual appearance in the setting of the real line (it is applied to treat Newton’s method) before it is introduced in the general setting of metric spaces. It is then used in the usual way to prove the Inverse Function Theorem and an existence and uniqueness theorem on initial value problems for systems of ordinary differential equations.
- The coverage of \( \mathbb{R}^{n} \) is quite thorough for a book of its kind. For example, Fourier expansions, the spectral theorem for real symmetric matrices, and matrix norms are all treated in detail.
- The Morse lemma for functions on \( \mathbb{R}^{n} \) is proved in detail as an application of the inverse function theorem.

The book contains plenty of exercises overall. Each of the 18 chapters has about 6 sections on average, and almost every section concludes with a sequence of 4 or 5 exercises, though there is quite a bit of variance in that number. The exercises are generally well chosen and are a mix of extending theorems, providing alternative proofs, working out small details in proofs, verifying definitions in examples, constructing counterexamples, and computations. However, there are almost no hard exercises; they generally range in difficulty from easy to moderate. The book contains a thorough and usable index. I found very few misprints, all of them minor notational inconsistencies that should have no detrimental effect on the reader.

In closing, I think this book would be a good choice as a textbook for courses or self-directed study in single- or multi-variable analysis. It stands out from other books on these topics for its detail and its coverage of differential equations, applications of Fourier series, and Lebesgue integration. It would also serve as a handy reference book.

Kyle Hambrook is an Assistant Professor in the Department of Mathematics and Statistics at San Jose State University.