When kids in school first learn the laws of "signed numbers", they often have trouble accepting the rule for multiplying two negative numbers. Similarly, when they're told that "anything to the zeroth power is one", they don't believe it. Many teachers haven't a clue how to explain any of this. (My high-school son tells me that his math teacher said to him the same thing that kids say to each other: "That's just one of those things you have to accept.")

When I teach 101, or Finite Math, or whatever it's currently called at whatever institution I'm adjuncting at, I take five minutes to try to strike a balance between rigor and the students' intuition. I define minus x as "zero minus x" and derive some laws using the laws of "regular" subtraction; for a^{0} = 1, I explain the laws of exponents, then write:

a^{0} = a^{(2-2)} = a^{2} / a^{2} = 1.
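The same classroom derivation can be written for a general exponent (a sketch of my own; note the a ≠ 0 caveat, which is worth saying aloud in class):

```latex
a^{0} \;=\; a^{\,n-n} \;=\; \frac{a^{n}}{a^{n}} \;=\; 1 \qquad (a \neq 0),
```

using the quotient law a^{m-n} = a^{m}/a^{n}. The derivation breaks down for a = 0, since it would then require dividing zero by zero.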

Then, if the students seem to be listening, I talk to them of more general mathematical modes of defining things involving arbitrary numbers, such as "trying to make it so that whatever laws are true for positive integers are also true for negative numbers and fractions".

This book goes into the details of all this, and more, also striking a balance between intuition and rigor, and, assuming interested readers, it opts for more rigor than the classroom situation might allow. This is stuff which I absolutely adored when I first learned it forty years ago, and which I absolutely adore now! Much of it, as I remember, I did *not* learn in school or through assignments, but through browsing in bookstores and libraries (in my case, happening upon originals by, for example, Cantor and Dedekind). It did come up in our courses, both undergrad and grad, and the teachers seemed to expect us to know it, and we did. I would, however, have appreciated knowing about this book. (Somehow I didn't; I don't know about my fellow students.)

Not only is the content of this book wonderful, but in many places, the teaching is superb. The author's descriptions and ways of nipping in the bud certain misconceptions are blessings for students and readers learning this material for the first time, and are also interesting for those who already know the material. For example, on p. 7, after the familiar proof that the square root of 2 is irrational: "What can we say about the distribution of the irrational numbers? That is, are they exceptions — to be found between the rational numbers only here and there?" And on pp. 13-14: "We certainly should not confuse wishful thinking with wish fulfillment... to wish that a number shall exist whose square is 2 or whose square is -1 is not the same as saying that such a number actually exists." Quoting Frege, the author continues, "Why not also ask that a line pass through three arbitrary points? ... one must first prove that these other conditions do not contain contradictions..." (When I teach Abstract Algebra, I often find myself saying, "Mathematicians take what they want — providing that's possible.") And p. 15, concerning "most" people's inclination to think of negative numbers as, for example, debts or below-zero temperatures: "Even if there did not exist in the empirical world a distinction between hot and cold, assets and debits, this would not affect the right to introduce positive and negative numbers." Pretty much every single page contains an example of such perceptive teaching, but I'll skip to a later part of the book, p. 220, where he talks about ultrareal numbers — in this case, a "number system" satisfying some but not all of the properties of the system of real numbers, namely the system of functions with poles at zero: "The scale of the real numbers is no longer sufficient to designate the order of poles. A denser structure belongs to the sequence of poles than to the continuum of real numbers..."

However, there were also many places in this book where I felt that the teaching could have been better or more accurate, or where the inclusion of a single word or sentence would have made all the difference in the world to a student. For example, p. 3: "the system of integers and fractional numbers is called the system of *rational numbers*. This system is closed under all four arithmetical operations". Although he soon makes it apparent that division by zero is an exception, this is just the kind of thing that trips a student up, and prevents her/him from fully concentrating on the rest of the lesson. And on p. 8 there is a slight inaccuracy which, again, a student could get hung up on: "These [real numbers] fall into two categories: the periodic and non-periodic decimal fractions..." By "periodic" the author must mean "*eventually* periodic"... And p. 26: "... a^{0} has not even been defined, for the definition of the power a^{n} applies only if the numbers n are greater than 0. We can, however, supplement the definition arbitrarily to take care of the case n = 0, and now we do so by stating that a^{0} = 1 by definition." (On the bottom of that same page he explains *why* that definition is chosen, but, in my opinion, this occurs too late to avoid confusion in the mindset of the student.)
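The "eventually periodic" distinction is easy to make concrete by long division; here is a small sketch of my own (not from the book) showing a rational whose decimal expansion has a non-repeating prefix:

```python
def decimal_digits(p, q, n):
    """First n decimal digits of the fraction p/q (0 <= p < q), by long division."""
    digits = []
    r = p
    for _ in range(n):
        r *= 10
        digits.append(r // q)  # next digit is the integer quotient
        r %= q                 # carry the remainder to the next place
    return digits

# 1/6 = 0.1666...: periodic only *eventually* (the leading 1 is not part of the period)
print(decimal_digits(1, 6, 8))   # [1, 6, 6, 6, 6, 6, 6, 6]
# 1/7 = 0.142857142857...: periodic from the very first digit
print(decimal_digits(1, 7, 12))  # [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]
```

Since only finitely many remainders are possible, the digits of any rational must eventually repeat; but, as 1/6 shows, not necessarily from the start.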

Skipping ahead a bit, to p. 55: The author introduces "number couples" again, but this time (to the end of "creating" fractions) (a,b) "means" a *divided* by b, whereas in the previous chapter (to the end of "creating" negative numbers) the same notation (a,b) had been used to denote something which "meant" a *minus* b. The use of the same notation to denote two different things (especially without acknowledging this), even when the two things are never discussed simultaneously, causes unnecessary confusion. On p. 101 he talks about "undecidable" propositions without having previously explained that concept, which is not something that the average non-mathematician or beginning mathematician knows. (The back cover claims that "no formal training in mathematics is necessary to appreciate its clear exposition. . .") And on p. 62 occurs just one example of what I feel is a misjudgement of which words or phrases to use (I was a student in the 60's, and would have been just as taken aback then as I was, momentarily, a week ago when I read this passage): "We will now select some point out of the area of the square..." For a few minutes I thought he meant *outside of* the square, rather than, just the opposite, the *inside* of the square.
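The confusion is not merely notational: the two readings of (a,b) come with *different* equality rules. A small sketch of my own (the class names are mine, not the book's) makes the contrast explicit:

```python
class DifferenceCouple:
    """(a, b) read as a - b; (a, b) ~ (c, d) iff a + d = b + c."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __eq__(self, other):
        return self.a + other.b == self.b + other.a

class QuotientCouple:
    """(a, b) read as a / b; (a, b) ~ (c, d) iff a * d = b * c."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __eq__(self, other):
        return self.a * other.b == self.b * other.a

# The same pair of couples compares differently under the two readings:
print(DifferenceCouple(2, 4) == DifferenceCouple(3, 6))  # False: 2-4 != 3-6
print(QuotientCouple(2, 4) == QuotientCouple(3, 6))      # True:  2/4 == 3/6
```

These equality rules are, of course, exactly the equivalence classes that (as I note below) the author declines to acknowledge.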

On p. 159, in his description and explanation of the Koch curve and why it's non-differentiable at every point, he convinces us of this for its *corners* and then says "Entirely analogous considerations can be applied to any other point of the curve", but that's only if we assume that every point on the curve is such a "corner", something which is true but not obvious. On page 194, on Cantor's construction of the real numbers from the rationals, the explanation of the square root of 2 would be much more visual if the terms "rounding up" (and "rounding down") to n decimal places were used. On page 210 he writes, without proof or brief explanation, "the appearance of such actual-infinitesimal quantities contradicts the Archimedean axiom"; one or two sentences here would, I believe, be appreciated by students. Finally, the inaccuracy on p. 214 bothers me: "Let us now return once more to the foundations of analytical geometry. The latter rests on the assumption that there exists a relation between the points of a straight line and the real numbers... Can this be proved? No, rather a new axiom goes into effect here, which *requires* that every point of the straight line can be put in a one-to-one correspondence to the real numbers." He means, of course, "the set of points of the straight line can be put in a one-to-one correspondence with the set of real numbers."

There is, of course, a slight amount of outdated (and perhaps amusing) material, for example about how Fermat's Last Theorem hasn't been proven yet; on pages 214-215 he uses this not-knowing to construct a "real number which is neither positive, negative, nor zero." Nowadays we would have to use, perhaps, Goldbach's Conjecture for this.
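For readers who want the updated construction spelled out, here is one way to run it with Goldbach's Conjecture (my sketch, not the book's):

```latex
x \;=\; \sum_{n=1}^{\infty} c_n \, 10^{-n},
\qquad
c_n \;=\;
\begin{cases}
0 & \text{if every even } m \text{ with } 4 \le m \le 2n+2 \text{ is a sum of two primes,}\\
1 & \text{otherwise.}
\end{cases}
\]
```

Each digit c_n is computable by a finite check, so x is a perfectly well-defined real number. If the conjecture is true, every c_n = 0 and x = 0; if it fails, x > 0. As long as the conjecture remains unsettled, we cannot decide which of the trichotomy's cases x falls into.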

Perhaps more important than any of this, the author seems to prefer to talk about epsilons without using quantifiers. Since he's so accurate, and willing to risk the displeasure of students and readers in just about everything else, I wonder why he doesn't just say "for any epsilon, no matter how small" — and, for that matter, "for any N, no matter how large"? Also, beginning on page 184, he uses the term "convergent" for what I always understood as "Cauchy". He also declines to acknowledge equivalence classes, a concept that might have been clarifying at several places throughout the book.

I also find it disturbing, as I have upon reading other books on "all the numbers", that quaternions and, in general, "hypercomplex" numbers are introduced without it being noted that their raison d'être, unlike that of negatives, fractions, and complex numbers, is not the solving of *equations*.

There are more than a handful of examples of a sloppy printing job (not, of course, the fault of the author). For example, on p. 55 the formula numbers are omitted, which would be fine if they were not subsequently referred to. On p. 67 the sections are numbered incorrectly. And on p. 86, line 25, "1" should, unless my math is amiss, be "c".

But what bothers me the most — and what I had the most fun with! — is the stuff on pp. 38-39. After defining "number couples" (the ones which "mean" the second subtracted from the first, this to the end of introducing negative numbers) he now defines multiplication of two arbitrary number couples (and will soon zoom into home plate with the law of signs as applied to two negative numbers). But he merely defines, right off, (a,b) x (c,d) = (ac+bd, ad+bc), and then states, "This is an arbitrary convention. It is only justified by the fact that this operation actually has some especially important formal properties of multiplication," then proceeds to show that the operation as defined does indeed satisfy those four formal properties. I feel that motivation *is* possible. Not only that, but this motivation would show that the definition is the *only* definition which satisfies those four properties. I thought about it over a period of several days, and came up with several ideas. The one which seems to hold most water (and which is easiest to summarize) involves first showing that, *for positive numbers* such that a > b and c > d, it's true that (a-b) x (c-d) = (ac + bd) - (ad + bc), so this is how we must "try" to define multiplication of arbitrary "differences", in order to yield a "product difference".
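For the record, the one-line computation behind that motivation:

```latex
(a-b)(c-d) \;=\; ac - ad - bc + bd \;=\; (ac+bd) - (ad+bc),
```

which forces the definition (a,b) x (c,d) = (ac+bd, ad+bc) if the couples are to behave like differences — no "arbitrary convention" required.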

I found the title misleading. I expected a book on Finite or Discrete Math, and was pleasantly surprised. I would call it, perhaps, "Arithmetic and Geometry Justified", or "Elementary Math from an Advanced Standpoint", or even "Non-axiomatic Axiomatic Arithmetic"! At any rate, "justified", "advanced", and "axiomatic" are all commendable qualities.

When I present a proof (or anything remotely rigorous) to non-math-majors, I often find myself commenting, "Rigor in math isn't intended to be an *ugly*, practical, regimented thing. Quite the opposite, it's intended to be beautiful, and in fact is often viewed (by some) as being *im*practical. Also, it's not supposed to be meaningless and non-logical; on the contrary, rigor *means* meaning and logic. It's in that spirit that mathematicians court rigor." This spirit came through big-time in this book.

Marion Cohen ([email protected]) teaches at the University of the Sciences in Philadelphia