
Discrete Algorithmic Mathematics

Stephen B. Maurer and Anthony Ralston
Publisher: A K Peters
Publication Date: 2004
Number of Pages: 772
Format: Hardcover
Edition: 3
Price: 88.00
ISBN: 1-56881-166-7
Category: Textbook
BLL Rating: The Basic Library List Committee strongly recommends this book for acquisition by undergraduate mathematics libraries.

[Reviewed by Alex Bogomolny, on 03/01/2005]

I don't know half of you half as well as I should like;
and I like less than half of you half as well as you deserve.

Bilbo Baggins
J. R. R. Tolkien, The Fellowship of the Ring

 

To paraphrase the memorable speech given by Bilbo Baggins on the occasion of his eleventy-first birthday, I liked less than half of this book half as well as it deserves. Let me explain.

There is little doubt that the authors went to considerable lengths to convey their ideas and to present the material in a comprehensible and edifying manner. The book contains chapter summaries, a list of symbols, notation, abbreviations and conventions, a Prologue that explains what Discrete Algorithmic Mathematics is about, an Appendix, Hints and Answers to selected problems, and an Index. Many of the topics are covered from several perspectives and with different verbiage, and many of the theorems are supplied with several proofs. The book opens with a Chapter 0 of mathematical preliminaries. Numerous, carefully selected exercises at the end of every section relate to the material in a meaningful way and, on the whole, nicely complement the contents of the book.

Besides the Prologue, the ancillary Chapter 0, and the Epilogue (the latter containing discussions of DNA matching and the Minimax Theorem), the book is composed of seven chapters:

  • Chapter 1. Algorithms. Examples of algorithms and introduction of the authors' algorithmic language AL.

  • Chapter 2. Mathematical Induction. Examples of inductive proofs and definitions. Variants of induction. Application of induction to the study of recursive and iterative algorithms. False inductions.

  • Chapter 3. Graphs and Trees. The usual introductory topics. Eulerian and Hamiltonian paths and cycles. Shortest paths. Breadth first and depth first searches. Coloring problems. Trees.

  • Chapter 4. Fundamental Counting Methods. The sum and product rules. Permutations and Combinations. Combinatorial identities and algorithms. The Binomial Theorem. Problems with balls and urns. Generating functions. Pigeonhole.

  • Chapter 5. Difference Equations. Modeling with difference equations. A Fundamental Theorem. Linear homogeneous equations with constant coefficients. Nonhomogeneous equations. Nonlinear equations. Finite differences.

  • Chapter 6. Probability. Probability space. Conditional probability, independence and Bayes' Theorem. Random variables and distributions. Expected value. Variance. Statistical estimation. Applications to algorithmic proofs. Recursive methods in probability.

  • Chapter 7. An Introduction to Mathematical Logic. The propositional calculus. Validity and Tautology. Boolean Algebra. The Predicate Calculus. Algorithm verification using the Predicate Calculus.

The book neatly splits into two parts. In Chapters 1-3 (pp. 9-320), where mathematics is presented in an algorithmic format, almost every page carries a sample of AL usage. In Chapters 4-7 (pp. 321-689), the mathematical exposition is more conventional; only towards the end of each chapter is mathematics applied to the study of algorithms. The distinction is mentioned in neither the Instructor's nor the Student's Preface. To the instructor the authors write,

Our attitude towards algorithms is summarized by our title. It is not "Discrete Mathematics"; it is not even "Discrete Mathematics with Algorithms"; it is "Discrete Algorithmic Mathematics". "Algorithmic" is an essential modifier of "Mathematics". Of course, we do not mean "algorithmic" in the sense of rote application of rules, but rather in the sense of algorithmics, which means thinking in terms of algorithms and their mathematical study.

This is an interesting concept, and I was eager to see it executed. My expectation was fostered at the beginning of Chapter 1, whose focus is on "one of the most important entities used to communicate mathematical thoughts: the algorithms". However, I was disappointed. Thinking in terms of algorithms, and their use as vehicles of communication, tapers off towards the end of Chapter 3. In the last two sections of Chapter 4, the authors compare three algorithms for generating random permutations, present an algorithm that lists all permutations, and devise an algorithm that, for a given number sequence, produces its longest monotonic subsequence. The title "Discrete Mathematics with Applications to Algorithms" would describe the composition and the spirit of the remaining chapters more accurately.

On quite a few occasions, especially in the early chapters, I had problems with the authors' prose. This is most unfortunate, for, as one learns from the Student's Preface, the authors intended the book to be read: "The text of each section is meant to be read as a whole, not just used for chunks to pull out as needed when doing homework." The recurrent problems I had with the book's language ranged from displeasure with some trifles, to discomfort with a few explanations, to complete disagreement with certain definitions. I'll give several examples. The trifles first.

pp. 40-41

If you understood our explanation of Eq. (2), you'll immediately see that it had two advantages over the use of ellipses:

  • It is more compact.
  • Whereas the pattern ...

This is a trifle: the first item is superfluous. To see the compactness of P(x) = ∑aᵢxⁱ as compared to the explicit P(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ... + a₁x + a₀ takes no understanding and does not require any explanation. Here is another one:

p. 41

Without such use of subscripts, summation notation would not be possible, for how could you possibly represent, say, a, b, ..., z [meaning of course a + b + ... + z. – AB] by a single symbol?

But just 2 pages later we read

In full generality we may write

∑_{R(i)} Aᵢ,    (5)

where R(i) is a function, often called a predicate...

And later, on the same page,

We may, in fact, generalize (5) even further ...

∑_{x∈T} f(x), or we may write ∑_{S⊂T} f(S).

Hmm, no use of subscripts ... Yet another example:

p. 57

(In fact, one can distinguish between several strengths of implication, and in some books ⇒ refers to only one of them. For instance, (x = 2) ⇒ (x² = 4) is a stronger implication than "if this is the Sears Tower, then this is the tallest building in the world." Although the former claim is always true, the latter switched from true to false between the first and second editions of this book. At any rate, we will not distinguish between levels of implications and we use ⇒ for all of them.)

True to their word, the authors never again mention different levels of implication. Thus, at best, the above paragraph serves no useful purpose. At worst, it may be confusing. It was to me. Just because one implication became false at a point in time does not mean that it was anything but true before that point. And yet another one:

p. 65

There are many ways to classify proofs. An important distinction in this book is between existential proofs and constructive proofs. Constructive proofs have an important subcase, proofs by algorithm. These three categories are discussed by example beginning on p. 252. Another important type of proof is by mathematical induction, the subject of Chapter 2. All these types can involve either direct or indirect arguments.

In the same paragraph, the classification of proofs is referred to as a "classification", a "subcase", a "category", and a "type". Thus the paragraph leaves one wondering whether mathematical induction forms a third piece of a trichotomy on the same level as existential and constructive proofs, or is a subcase of one (or both) of those, if it belongs to the scheme at all. And more:

p. 65

While proof is the standard of justification in mathematics, lesser types of evidence have their place. What do we mean if we ask you to "verify" or "show" something instead of prove it?

It is a fact (shown in Chapter 4) that each n-set has exactly 2ⁿ subsets. ... In short, for an example "verify" and "show" mean check it out in that case by any method, including brute force.

Curiously, the aforementioned fact has certainly been proved in Chapter 4 (p. 351), not just shown. And, of course, "brute force", if it works, is as valid a method of proof as the fancier ones.

p. 210

We seem to have said that induction is the technique to use when you have infinitely many cases to prove; induction allows you to prove them by proving them one at a time.

I believe the right wording here should be "... proving them all at once".

The above and other similar occurrences of unfortunate language could be easily corrected and, in many cases, could be simply left out without affecting the text. But there are instances of a different sort, where the language slips would require a more thoughtful correction. Following are a few examples.

p. 7

Whereas classical mathematics is about formulas, discrete mathematics is as much about algorithms as about formulas.

One may agree or disagree with the authors as regards discrete mathematics — the second part of the assertion — for it is a qualified claim anyway: there is room to think of a mysterious third component, say concepts and ideas, that might outweigh both algorithms and formulas. But I find the first part (classical mathematics is about formulas) outright offensive, and its appearance in a textbook unpedagogical.

Sometimes students may sense condescension on the authors' part:

p. 100

Algorithm HANOI looks short and sweet, but you may not find it at all easy to understand.

This accompanies the introduction of the recursive Tower of Hanoi algorithm (Towers, in the book). And a few pages later,

p. 104

Often, however, there is no straightforward iterative approach. This is the case with the Towers of Hanoi problem which is naturally recursive. Although it can be implemented iteratively, the iterative form is trickier and not so easy to understand.

Student understanding is affected by the amount of explanation the text provides. The authors do offer two iterative algorithms for the Tower of Hanoi problem, but as exercises, with no explanation at all. So the comparison with the recursive version, which is chewed on throughout the book, is plainly unfair. Furthermore, as reference examples the authors chose two iterative algorithms that most likely came from programming folklore, each based on a clever observation or idea that may indeed be hard to grasp. The authors' assertion, however, is completely general and encompasses all possible iterative algorithms, which also makes it false. There exists an intuitively clear, although hardly efficient, iterative solution for the Tower of Hanoi problem. The algorithm in fact does more: for every valid configuration of disks it produces a unique move, and a sequence of such moves eventually leads to the solution. (This approach has been explained and implemented on the web.)
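
To make that claim concrete, here is a minimal Python sketch of one such configuration-driven iterative solver. It is my own rendering, not the book's AL and not necessarily the implementation published on the web: at every step it recomputes, from the current positions of the disks alone, the unique move that advances the puzzle toward the target peg.

    def next_move(pos, n, target):
        """pos[d] = peg (0, 1 or 2) of disk d, disk 1 being the smallest.
        Return (disk, src, dst) for the next move toward stacking all n disks
        on `target`, or None if the puzzle is already solved."""
        goal, move = target, None
        for d in range(n, 0, -1):            # examine disks from largest to smallest
            if pos[d] == goal:
                continue                     # disk d is already where this subproblem wants it
            move = (d, pos[d], goal)         # tentative move for disk d ...
            goal = 3 - pos[d] - goal         # ... but the smaller disks must first clear to the third peg
        return move                          # the move recorded for the smallest out-of-place disk

    def solve(n, start=0, target=2):
        pos = {d: start for d in range(1, n + 1)}
        while (m := next_move(pos, n, target)) is not None:
            d, src, dst = m
            print(f"move disk {d}: peg {src} -> peg {dst}")
            pos[d] = dst

    solve(3)                                 # prints the classical 7-move solution

Recomputing the move from the configuration costs a pass over all n disks per move, which is exactly what makes the approach transparent but hardly efficient.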

p. 103

Might self-invocation (recursion) get out of hand? That is, might you dig a hole so deep — that is, invoke the computation so many times without completing an execution of it — that you could never climb out with a solution to the original problem?

The metaphor is lost on me, but it made me wonder why the authors singled out recursion for such a warning. (See also their explanation of how best to think of recursive calls in AL.) It is as easy to spin an endless loop as to dig a bottomless recursion.

I found the following passage quite disturbing:

p. 137

In this book, you will see many proof techniques. We hope you will develop a general understanding of them all. For mathematical induction we have a higher goal. We want you to understand it so well that you can do it yourself, even in this first course.

This is really bewildering. A note in this spirit might be suitable in the Instructor's Manual, but it must be discouraging for a student to come across it in a textbook.

p. 147

But why make this fuss between a particular n and all n? The reason for the fuss...

I do not understand why drawing students' attention to the fact that in the inductive step P(n) ⇒ P(n+1), n stands for an arbitrary single value, unlike in, say, the formula (n - 1)(n + 1) = n² - 1, where n is any number, should be referred to as a "fuss".

In Chapter 1, the authors observe that "Mathematics requires both precision of thought and precision of language." It is truly unfortunate that the book itself is sometimes lacking in both.

p. 76

Even if you haven't encountered the term "algorithm" before, we hope that Example 1 gave you an intuitive feeling for what this term means. But for a term that will play such an important role in this book we should be more specific. Any algorithm must have the following properties:

  1. Finiteness. Application of the algorithm to a particular set of data must result in a finite sequence of actions.
  2. Uniquely determined initial step. The action which starts off the algorithm must be unambiguously determined. This is true in AL because we always begin with the line following the line with the name of the algorithm.
  3. Uniquely determined succession. Each action in the algorithm must be followed by only one valid successor action.
  4. Solution. The algorithm must terminate with a solution to the problem it purports to solve, or it must indicate that, for the given data, the problem is insoluble by this algorithm.

I mention in passing that the students may be assumed to have already encountered the term "algorithms" on p. 7:

The fact is, we typically use formulas like Eq. (1) only as stepping-stones to organized procedures for computing answers. Such procedures are called algorithms.

There is a real issue with the text on p. 76: is this a definition or a statement? The authors, I believe, meant it to be the former. However, without the forgotten sentence from p. 7 — "Such procedures are called algorithms" — the properties enumerated on p. 76 appear to dangle. For a term that is supposed to play such an important role in the book, the authors should have been more careful.

Later on p. 76 the authors write:

Thus, an algorithm is nothing more — or less — than a precise recipe for carrying out a sequence of operations to solve a problem.

This is indeed what I was taught and this is how I think about algorithms. But, in this generality, not all algorithms possess properties 1 and 4 from the list above. The authors have in fact painted themselves into a corner by demanding all four properties as part of the definition. The strict requirements 1 and 4 make statements like the following (p. 174)

Since loop invariants only prove algorithms correct if the algorithm terminates...

meaningless. An algorithm that satisfies 1-4 is correct and terminates by virtue of the definition. Thus, the authors appear completely justified in giving the following example:

p. 174

Show that the following is not an algorithm

Algorithm WHAT'S-WRONG

    a ← 4
    repeat while a ≠ 2
        a ← (a+2)/2
    endrepeat

Within such a framework, it makes no sense to ask, as is naturally done throughout the book, whether an algorithm is correct. The only sensible question to ask is "Is this an algorithm?", for, as long as the latter question remains unanswered, the correctness of whatever-else-this-might-be stays moot. On the other hand, as soon as we have an answer to that question, the correctness comes gratis, as a bonus of the definition.
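
For the record, a quick Python check (my sketch, not the book's AL) confirms that WHAT'S-WRONG indeed violates property 1: under exact arithmetic the loop variable only approaches 2 and never reaches it.

    from fractions import Fraction

    a = Fraction(4)                  # exact rational arithmetic, as the AL fragment intends
    for step in range(1, 11):
        a = (a + 2) / 2              # the body of the repeat loop
        print(step, a)               # 3, 5/2, 9/4, 17/8, ...: a = 2 + 2**(1 - step)
    assert a != 2                    # the loop guard "a ≠ 2" never becomes false

(With ordinary floating-point arithmetic the loop would in fact stop eventually, once rounding makes a exactly 2 — a reminder that finiteness depends on what arithmetic is assumed.)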

Property 2 is also questionable. An algorithm should of course specify the type of its own input. Its correctness should be judged against the data of the specified type. In the book, this requirement is beautifully incorporated (Chapter 7, p. 638) into a formal algorithm verification framework as input specifications (IS). As mentioned in the descriptive introduction of the term "algorithm", algorithms are "organized procedures" and should be invoked as such with IS compliant data. I do not understand why property 2 should be requested specifically of "algorithms" and not of other programming constructs — procedures and functions.

(In passing, the assertion made in the description of property 2 that it automatically holds in AL is not always correct. Whenever a sequence of operations is collected into an AL procedure or function, the authors usually — in fact, always, as far as I could judge — place the body of the procedure immediately following the line with the name of the algorithm. On such occasions, the entry point of the algorithm is located at, or next to, the bottom of the algorithm.)

The description of procedures and functions in the book is somewhat confused, and unnecessarily so. The AL syntax for procedures and functions is identical, with two distinctions:

  1. Procedures may have, as arguments, out and inout lists of variables that are passed to them by reference (pp. 107-108). Procedures may modify these variables; the modified values become available outside the procedure. Functions may accept only in arguments.

  2. Functions return (and manipulate) a value in a variable that shares the function's name.

Unlike procedures, functions are not assumed to affect values of the variables that have been (implicitly) defined outside their body.

p. 97

... a function is just a portion of an algorithm which computes a value when called upon by another portion of the algorithm. A procedure is also a portion of an algorithm that is called upon from some other portion, but whereas a function has one specific task (to compute and return a value), a procedure can do any sort of thing...

However, AL does allow global variables, whose scope is the body of the algorithm, and these are available to both functions and procedures for modification. Thus the AL syntax permits functions to generate "side effects". The authors do not condone this practice and even advise against it, but they place no formal restrictions that would prevent it.
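
To illustrate what such a permitted-but-discouraged side effect looks like, here is a hypothetical Python analogue (not AL; the calls counter is my invention): a function that returns a value while quietly modifying a variable declared outside its body.

    calls = 0                        # plays the role of an algorithm-level ("global") variable

    def gcdf(i, j):
        """Return gcd(i, j) -- and, as a side effect, bump the outer counter."""
        global calls
        calls += 1                   # nothing in the function's header reveals this modification
        return i if j == 0 else gcdf(j, i % j)

    print(gcdf(84, 36), calls)       # 12 3 -- the caller finds `calls` changed behind its back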

There is also some confusion in the treatment of global and local variables. For example, the namesake of a function is declared a global variable:

p. 112

The rules for what variables are local are exactly the same as for procedures, and thus in a function every non-global variable except funcname is local; only funcname affects higher levels.

I take exception to this exception: funcname is still not a global variable. It is available nowhere but in the body of the function funcname.

On the whole, much of the confusion could have been avoided by introducing two standard attributes — scope and life span — of every variable. This would help obviate repeated and confusing discussions of variables "with the same name":

p. 83

However, when the values at each stage do not have to be saved for the next stage, it's convenient and efficient in the algorithm to call the quantities by the same name each time through the loop.

I am just curious what use a loop would be if the variables within its scope had different names from one pass to another. And further on the same page:

Nevertheless, we do have to do some saving from one pass through the loop to the next. This is accomplished using the temporary variable rem.

repeat until denom = 0
    quot ← [num/denom]
    rem ← num – (quot × denom)
    num ← denom; denom ← rem
endrepeat

There is nothing saved from one pass through the loop to the next. The variable rem is indeed temporary (in a colloquial sense), but I would guess its scope is limited to the repeat/endrepeat construct.
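
For comparison, here is a direct Python transcription of the quoted loop (my rendering, not the book's AL), which makes the roles of num, denom, quot and rem concrete: rem is written and consumed within the same pass.

    def gcd_loop(num, denom):
        while denom != 0:                 # "repeat until denom = 0"
            quot = num // denom           # quot <- [num/denom]
            rem = num - quot * denom      # rem is written here ...
            num, denom = denom, rem       # ... and consumed in the very same pass
        return num

    print(gcd_loop(84, 36))               # 12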

Lastly, the authors seem to have a soft spot for recursion:

p. 94

While procedures and functions do not have to involve recursion, their use to implement recursion will be their main purpose in this book.

This is an unnecessary declaration, but it looks like the authors really meant it. For example, in a discussion of the recursive algorithm EUCLID-RECFUNC, they explain

p. 95

Think of it this way: Each time function gcdf is invoked, it is as if a new copy of its definition is made, but one in which the new values for i and j are substituted. The previous copy is interrupted but is still there to be returned to later at the place where it was interrupted.

Similarly (and additionally), for the HANOI algorithm:

p. 101

The first invocation of H within H(3, 1, 3), namely, H(2, 1, 2), causes us to interrupt the normal sequential execution of steps of the algorithm and go back to the beginning of a new copy of the procedure.

Regardless of whether it is possible to go back to a new copy, the two passages are confusing. First, calling a function or procedure from another function or procedure is absolutely normal in programming. Second, if the "interruption" means pushing local variables on a stack and passing control to another function or procedure, then the process is common to all function and procedure invocations, not just recursive calls. And third, functions and procedures are not copied. I do not understand why such imagery has been deemed more transparent than the common push/pop stack mechanism for variables whose scope is defined by the calling portion of the algorithm:

pp. 110-111

The answer is found in the following convention for all procedures (and functions), recursive or not: Each call of a procedure creates a template (collection) of entirely new local variables. ... When each call is made, the local variables at the level from which the call is made (the calling level) are put aside in favor of those in the template created at the new level, and when each level is completed and control is returned to the calling level, the template in the calling level is restored.

Such downplaying of regular — as opposed to recursive — function calls may lead to avoidable question marks in the otherwise very lucid discussion of call graphs (p. 237), where cycles indicate indirect recursion. I also find the earlier explanations (p. 95 and p. 101) somewhat at odds with the more general one.
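
As a point of comparison, here is how the recursive gcd looks in Python (my sketch; the name only echoes the book's gcdf), instrumented to show the ordinary frame picture: every call, recursive or not, gets its own local i and j, and the caller's values reappear untouched when control returns.

    def gcdf(i, j, depth=0):
        indent = "  " * depth
        print(f"{indent}enter gcdf({i}, {j})")              # a fresh frame with its own i and j
        result = i if j == 0 else gcdf(j, i % j, depth + 1)  # a nested call pushes another frame
        print(f"{indent}leave gcdf({i}, {j}) -> {result}")   # this frame's i and j are intact on return
        return result

    gcdf(84, 36)      # the nested enter/leave lines trace the push and pop of frames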

Finally, let me say this. As I mentioned at the beginning, the book consists of two parts in which algorithms play distinctly different roles. The two parts also differ in the quality of the prose and of the general exposition. The second, more conventional, part is by far the more readable. The use of the Predicate Calculus in Chapter 7 for algorithm verification is outright engaging. The first part, I believe, suffers from the introduction of AL, the authors' algorithmic language. Too much effort goes into its definition, and the results are still inferior; all the mathematics in the first part might have been explained just as well with traditional pseudocode. On the whole, the foundational first part, the part that is truly meant to be read by the students, has proved hard reading. For this reason, I can't recommend the book to its intended audience. I would certainly like the first part to be taken better care of. However, the book contains enough interesting and significant mathematics to make me await a 4th edition with excitement and anticipation.


Alex Bogomolny is a business and educational software developer who lives with his wife and two sons — 25 and 5 — in East Brunswick, NJ. This past summer, his web site Interactive Mathematics Miscellany and Puzzles was honored with a 2004 MERLOT Classics award.

 


List of Algorithms inside front cover
Instructor's Preface ix
Student's Preface xiv
Chapter Summaries xvii
Pathways Through the Book xx
Problem Difficulty Rating xxii
Symbols, Notation, Abbreviations and Conventions xxiii
PROLOGUE What Is Discrete Algorithmic Mathematics? 1
CHAPTER 0 Mathematical Preliminaries 9
0.1 Sets, Relations, and Functions 9
0.2 Some Important Functions 20
0.3 Growth Rates and Order Notation 30
0.4 Summation and Product Notation 40
0.5 Matrix Algebra 50
0.6 The Language and Methods of Reasoning 56
Supplementary Problems 69
CHAPTER 1 Algorithms 73
1.1 Some Examples of Algorithms 73
1.2 Aspects of AL 88
1.3 Recursive Algorithms 94
1.4 Algorithmic Language: Procedures and Functions 105
1.5 The Analysis of Algorithms 117
Supplementary Problems 131
CHAPTER 2 Mathematical Induction 135
2.1 Introduction 135
2.2 First Examples 137
2.3 Strong Induction and Other Variants 151
2.4 Induction and Recursive Algorithms 158
2.5 Induction and Iterative Algorithms 168
2.6 How to Conjecture What to Prove 180
2.7 Inductive Definitions 194
2.8 Faulty Inductions 201
Supplementary Problems 210
CHAPTER 3 Graphs and Trees 217
3.1 Examples and Terminology 217
3.2 Paths, Cycles and the Adjacency Matrix 236
3.3 Eulerian and Hamiltonian Paths and Cycles 250
3.4 Shortest Paths 266
3.5 Breadth First Search and Depth First Search 278
3.6 Coloring Problems 285
3.7 Trees 296
Supplementary Problems 311
CHAPTER 4 Fundamental Counting Methods 321
4.1 Introduction 321
4.2 First Examples: The Sum and Product Rules 322
4.3 Subtler Examples and Further Rules 329
4.4 Permutations and Combinations 340
4.5 Combinatorial Identities and Combinatorial Arguments 348
4.6 The Binomial Theorem 355
4.7 Four Common Problems with Balls and Urns 365
4.8 Generating Functions 375
4.9 Combinatorial Algorithms 385
4.10 Algorithmic Pigeonholes 399
Supplementary Problems 406
CHAPTER 5 Difference Equations 411
5.1 Introduction 411
5.2 Modeling with Difference Equations 413
5.3 Getting Information from Difference Equations 426
5.4 Terminology and a Fundamental Theorem 433
5.5 Constant Coefficient Homogeneous Linear Difference Equations 439
5.6 Qualitative Analysis 451
5.7 Nonhomogeneous Difference Equations 457
5.8 Applications to Algorithms 462
5.9 Variable Coefficients, Sums, and Recent Advances in Computer Algebra 479
5.10 Nonlinear Difference Equations 484
5.11 Finite Differences 495
Supplementary Problems 508
CHAPTER 6 Probability 513
6.1 Introduction 513
6.2 Probability Space 517
6.3 Conditional Probability, Independence, and Bayes' Theorem 527
6.4 Random Variables and Distributions 539
6.5 Expected Value 557
6.6 Variance 569
6.7 Statistical Estimation 577
6.8 Applications to Algorithms: Proofs of Prior Claims 586
6.9 Recursive Methods in Probability 595
Supplementary Problems 606
CHAPTER 7 An Introduction to Mathematical Logic 611
7.1 Introduction and Terminology 611
7.2 The Propositional Calculus 616
7.3 Validity and Tautology 630
7.4 Algorithm Verification 637
7.5 Boolean Algebra 643
7.6 The Predicate Calculus 663
7.7 Algorithm Verification Using the Predicate Calculus 676
7.8 Theorems and Proofs 682
Supplementary Problems 689
EPILOGUE Coming Full Circle with Biology and Minimax Theorems 691
E.1 DNA Matching 691
E.2 Algorithmic Mathematics and Minimax Theorems 707
Final Problems 717
References 723
Appendix: Limits 729
Hints and Answers 733
Index 761
About the Authors inside back cover