OK, let’s get two things straight before I review this book: First of all, I am not probability theory’s biggest fan. Don’t get me wrong — I was originally trained as a chemist, so I certainly recognize its enormous importance in the natural and social sciences. Random variables and probability distributions have become all the rage in financial engineering and mathematical finance as well. We all know someone who, before their 27th birthday, was offered a six-figure salary by some humongous corporation to play with stochastic functions in some nebulous way, make them a few more million, and tell them whom to fire at the bottom.

Yes, “quants” and financial mathematicians are now very sexy on Wall Street and its brethren boulevards; they have bright futures indeed. These disciplines have traditionally required sophisticated mathematical training in measure theory, harmonic analysis and stochastic processes, skills far better suited to physicists and mathematicians than to “suits” who wouldn’t know — or care to know — Brownian motion from a bag of brownies from Dunkin’ Donuts.

Ordinarily this would be a Good Thing indeed for our discipline, given the cold reality of the perceived worthlessness of educated people in materialistic, morally bankrupt, and ultra-pragmatic America at the dawn of the 21st century, and given the lucrative career options this opens for newly minted PhDs in mathematics. (To actually be able to get rich doing mathematics — 20 years ago, who’d a thunk it?)

I’m an old-fashioned guy mathematically. I believe in learning measure theory before Lebesgue integration, Lebesgue integration before advanced probability, and all of that before any of its high-tech applications. I learned measure and general integration theory in a murderous one-semester graduate course taught by Queens College’s now retired guru of real analysis, Gerald Itzkowitz. Brutal, but one of the most enlightening courses I’ve ever had the privilege of being part of. In an ideal world, all those who desire to enter the new frontier of financial mathematics would take many such courses and become skilled theoretical analysts before doing so. Doing it any other way strikes me as somehow dishonest and fraudulent.

Of course, that’s not realistic. This being America, the sadly inevitable has occurred: Smelling money, the less-than-mathematically passionate students have demanded courses in academia that teach them what they need to know to take up these new careers — and nothing else. (In my experience, such people rarely *ask* for anything they want — they either demand it or take it when no one’s looking.) Universities — also smelling money — have spent a lot of effort and time attempting to design such courses. This has led to a host of textbooks in the last decade attempting to teach the high-level mathematics that is needed for the trendy new careers. Predictably, most of these are “cookbooks,” presenting the tools of measure theory and distributions as mindless algorithms without much in the way of explanation. As a realist, I have to accept the inevitability of such texts and courses.

Some mathematicians have attempted to craft works that try to satisfy this market intelligently — that is, with texts and corresponding courses that accept a bare minimum of preparation and present advanced probability on that basis. *A Second Course in Probability* by Sheldon M. Ross and Erol A. Peköz is such an attempt. Presuming knowledge of basic calculus, Ross and Peköz develop all the basics of advanced probability and stochastic processes.

To me, this is tantamount to trying to do *King Kong* without the ape. But two things became clear to me as I read this book: Ross and Peköz are determined to write such a book without making it a brainless cookbook, and they have given the question of how to do it a *lot* of careful thought. The resulting text is odd, to say the least, but effective.

Chapter 1 begins with a dramatic demonstration of just how odd this book is going to be from the traditionalist standpoint. It opens with a clever example of a non-measurable set on the probability space formed by the unit circle (which, after rescaling, is equivalent as a probability space to the interval [0, 1]).

It’s worth looking at this example in detail since it sets up the authors’ overall game plan for the book. The example asks how to measure the probability of choosing uniformly at random the head of a family as a function of *x*, where *x* is a point on the unit circle. Two points on the edge of the circle are considered to be in the same family in this example if and only if you can go from one point to the other by taking steps of length one unit around the edge of the circle. By this the authors mean each step you take moves you an angle of exactly one radian around the circle, and you are allowed to keep looping around the circle in either direction. It’s not hard to show, from the standpoint of the usual in-depth treatments of measure theory, that this is in fact a variation of the Vitali nonmeasurable set. Since this set is not Lebesgue measurable, it follows almost trivially that the probability measure cannot be well-defined on it. But Ross and Peköz use this clever example to maximum conceptual effect, analyzing *why* the probability function makes no sense on it with a great deal of intuition and using no more than basic elementary probability to demonstrate it.

They do the probability calculation as follows: First they note that each family is infinite, since if you could move either clockwise or counterclockwise around the circle from a fixed point *x* and return to *x* in a finite number *a* of steps of length 1, looping around the circle *b* times, then there would be positive integers *a*, *b* such that *a* = 2*πb*, which implies *π* = *a*/(2*b*), which is impossible since *π* is irrational. They then show that no common value for the probability of each family member can work: it can be neither 0 nor any positive number. They define the events as follows:

A = {*x* is the head of the family}

A_{i} = {*x* is *i* steps clockwise on the unit circle from the head of the family}

B_{i} = {*x* is *i* steps counterclockwise on the unit circle from the head of the family}

Since *x* is uniformly chosen and every family has a head, we should have P(A) = P(A_{i}) = P(B_{i}) for every *i*, and the probabilities must sum to 1. But then:

1 = P(A) + Σ_{i} (P(A_{i}) + P(B_{i})) = *x* + Σ_{i} 2*x*, where *x* = P(A); the right-hand side is 0 if *x* = 0 and infinite if *x* > 0,

which is impossible. *Why* it is impossible motivates a great deal of the presentation that follows; the authors keep returning to this example as they proceed through the book.

Note that this very clever argument by contradiction requires no real analysis, no deep knowledge of the properties of the reals, no measure theory, and no sophisticated analytic machinery whatsoever. And yet it’s a perfectly rigorous argument that satisfies the claim. It is very typical indeed of the approach the authors take — they want to present mathematically complete arguments that require only the most basic knowledge.
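The never-returning-steps part of the argument can even be watched numerically. The following sketch is my own illustration, not from the book: it checks how close *a* steps of one radian ever get back to the starting point, and the answer is never zero, exactly because *π* is irrational.

```python
import math

# Each step advances exactly 1 radian around a circle of radius 1.
# If a steps brought you back to the start after b full loops, then
# a = 2*pi*b, i.e. pi = a/(2b) -- impossible, since pi is irrational.
# Numerically: the position after a steps, a mod 2*pi, never hits 0.

def distance_from_start(steps: int) -> float:
    """Arc distance (in [0, pi]) from the starting point after `steps` steps."""
    angle = math.fmod(steps, 2 * math.pi)
    return min(angle, 2 * math.pi - angle)

closest = min(distance_from_start(a) for a in range(1, 100_001))
print(f"closest return over 100,000 steps: {closest:.2e} radians")
```

Floating point can’t prove irrationality, of course; the point is only that the closest approach shrinks slowly without ever reaching zero, which is exactly why each family winds around the circle forever without closing up.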

The use of rigorous analysis is impressively kept to a minimum in the book while still effectively presenting all the basics of advanced probability theory. More importantly, the authors use constructive arguments such as the one for the non-measurable set to motivate the definitions of the concepts from graduate analysis that they *do* require. For example, the unit circle in the example above is used to motivate the definitions of countable and uncountable sets. Real numbers are presented as decimal expansions, something most students should already be somewhat familiar with. Chapter 1 continues with the definition of a sigma-field, presented set-theoretically. Basic set theory is taken to be familiar to students from basic probability, and it is used repeatedly throughout to build on the student’s previous knowledge. A probability function is defined as a countably additive measure on a sigma-field. The needed basics from measure theory — lim sup and lim inf, Borel fields, Lebesgue measure, the dominated convergence theorem — are all defined and explained using these two basic definitions, basic set theory, and the convergence properties of real sequences. The chapter then uses these concepts to build the foundations of advanced probability: random variables, expected value, discrete and continuous distributions, the law of large numbers, and the ergodic theorem.
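The law of large numbers that the chapter builds up to lends itself to a quick numerical illustration. This sketch is mine, not the book’s; it just shows sample means of Uniform(0, 1) draws settling toward the true mean 1/2:

```python
import random
import statistics

# Empirical law of large numbers: the sample mean of n i.i.d.
# Uniform(0, 1) draws approaches the true mean 1/2 as n grows.
random.seed(42)  # fixed seed so the run is reproducible

for n in (10, 1_000, 100_000):
    sample_mean = statistics.fmean(random.random() for _ in range(n))
    print(f"n = {n:>7}: sample mean = {sample_mean:.4f}")
```

The deviation from 1/2 shrinks roughly like 1/√n, which is the central limit theorem’s territory and the subject of the next chapter.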

The book then proceeds to build in this fashion. Chapter 2 discusses the Central Limit Theorem using the Stein–Chen method. Sequential convergence is used heavily here, not only to lay out the definitions, but also to construct examples and give computations. Chapter 3 discusses conditional expectation and martingales, assuming the Radon–Nikodym theorem without proof, one of the rare times the authors do this. They also define filtrations (increasing sequences of sigma-fields), a sophisticated concept but one that should be clear to anyone with basic set theory; this is then used to develop the properties of martingales. Chapter 4 covers the standard bounds on probabilities and expectations, such as Jensen’s inequality, Chernoff bounds, conditional expectation inequalities, and stochastic orderings. Chapter 5 gives a detailed introduction to Markov chains, emphasizing their irreducibility and reversibility properties. Chapter 6 discusses renewal and Poisson processes, the major modeling tools for independent events — including a brief discussion of queueing theory, Blackwell’s theorem, and the general Poisson process. The final chapter was my favorite; it gives an introduction to Brownian motion, using martingales and Brownian constructions to give a very nice alternate proof of the Central Limit Theorem, combining all the theory that has come before with the continuity of Brownian paths.
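To give a concrete taste of the chapter 5 material, here is a toy sketch of my own (the two-state chain is made up, not an example from the book): power iteration drives any starting distribution to the chain’s unique stationary distribution, which for this particular chain can also be computed by hand as (5/6, 1/6).

```python
# Stationary distribution of a two-state Markov chain by power iteration.
# Rows of P sum to 1; the chain is irreducible, so it has a unique
# stationary distribution pi satisfying pi P = pi.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(dist, P):
    """One step of the chain: new[j] = sum_i dist[i] * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(P)))
            for j in range(len(P))]

dist = [1.0, 0.0]      # start deterministically in state 0
for _ in range(200):   # iterate until numerically stationary
    dist = step(dist, P)

print(dist)  # approaches [5/6, 1/6]
```

Solving pi P = pi by hand gives 0.1·pi_0 = 0.5·pi_1, so pi_0 = 5·pi_1, and normalizing yields (5/6, 1/6), matching the iteration.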

To be honest, I’ve been stuck for a long time on what exactly to make of this book. Did I hate the book? No, on the contrary, I liked it a great deal. It is well structured, sticks to its guns on the level of presentation throughout, and has *lots* of concrete examples that the authors have put a lot of thought into. There are also a lot of nice problems, none of them too hard and most of them designed to help students come to grips with the impressive volume of theory covered. Ross and Peköz have taken designing this course very seriously, and the book shows it. A serious, hard-working student who wants to learn this material will be richly rewarded by working through it and doing most of the exercises; they will emerge with a fairly deep understanding of the concepts and methods of modern probability. So why am I having so much trouble giving this book a wholehearted thumbs-up?

My biggest mathematical gripe with the book is that the authors aren’t really clear from the start about what level of mathematical proficiency they’re assuming as a prerequisite. They say in the introduction that the purpose of the book is to present advanced probability theory “rigorously and with only calculus and a first course in probability as assumed background.” To me, they seem to break this principle — but this really depends on what you mean by *calculus*. There are several concepts from undergraduate real analysis that the authors rely on very heavily, especially convergence theory for sequences and infinite series; these are used over and over throughout the book. There is also very heavy use of inequalities, particularly in chapter 4. To follow their treatment and understand it, the student will need to be fairly comfortable with the properties of the real numbers, with basic proof methods for inequalities and how to work with them, and with proving basic limit theorems for convergent sequences and infinite series. Is this reasonable given the authors’ stated aims? Remember, this is a course designed for non-math majors in other disciplines who need to know advanced probability. Do Ross and Peköz expect “epsilonic” convergence calculations and inequality manipulation to be known to non-math-major advanced undergraduate/graduate students from their baby calculus course? That’s a stretch, to say the least. A much more realistic prerequisite for a course based on this book would be baby real variables or a strong honors calculus course in the spirit of Kenneth Ross’ *Elementary Analysis: The Theory of Calculus* or Spivak’s *Calculus*.

The book could also be used most profitably by mathematics graduate students as either preliminary or concurrent reading to a traditional advanced course in probability; it would supply an intuitive and conceptual approach to balance the brute rigor and difficulty of the standard texts like Dudley or Durrett. It would effectively deepen their understanding of the new concepts any advanced probability course thrusts upon the student.

Still, getting what is traditionally second-year graduate mathematics down to the honors undergraduate level without losing any rigor, and with very strong conceptual clarity, is a major achievement. Both authors are to be greatly commended for it. Good work, guys — I’m impressed.

That being said, I would still have reservations about using the text with a standard group of undergraduates and/or non-math majors. I yearn for the old days when everyone who wanted to learn this material took the same courses, math graduate students or not. I was born a generation too late in that regard. In any event, despite its problems, the text is a quite valuable addition to the literature: an honest attempt to teach this material to undergraduate-level students without a plug-and-chug, handwaving presentation. For that alone, the authors have provided us with a book worth a long, serious look. But instructors are going to have to decide carefully whether or not their students can handle it.

Andrew Locascio is currently using the summer before his second year as a Pure Mathematics Master’s student at Queens College of the City University of New York to master graduate differential geometry and basic algebraic topology before taking courses in deformation theory and advanced topology. He is also attempting to lose weight so a heart attack or a stroke doesn’t spoil those plans by killing him or worse. Sadly, the former is turning out to be much easier than the latter. He is also developing research projects for a student research seminar, to be held in August, run by the Queens College Math Society he helped found. He has an opinion on everything. His blog with those opinions — both mathematical and otherwise — can be found at http://categoryofandrewsopinions.blogspot.com/.