Publisher: Birkhäuser

Publication Date: 2006

Number of Pages: 265

Format: Hardcover

Edition: 2

Price: 59.95

ISBN: 0-8176-4372-9

Category: Monograph

[Reviewed by Luiz Henrique de Figueiredo, on 02/21/2006]

Have you ever wondered how calculators and computers manage to find the value of sin(24) and the like? If so, this book definitely has something for you. In fact, unless you design numerical libraries or hardware, it probably has much more than you ever wanted to know. But don't let that discourage you.

You probably know or won't be surprised to learn that calculators and computers have special circuitry for the basic arithmetic operations: addition, subtraction, multiplication, and division. But how do they handle more complicated functions like square root, sin, and exp? These are called the elementary functions because they show up everywhere in both theoretical and applied science. These functions are not rational functions and so cannot be evaluated using any finite sequence of basic arithmetic operations — some sort of approximation must be used. This is the subject of Muller's book. (You'll probably remember Newton's method for computing square roots. You may be surprised to learn that sometimes division is also performed with Newton's method, even in hardware.)
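The Newton iterations alluded to above can be sketched in a few lines of Python (my own illustration, not code from the book; the seeds and iteration counts are arbitrary choices):

```python
import math

def newton_sqrt(a, iterations=6):
    """Approximate sqrt(a) via Newton's method: x <- (x + a/x) / 2."""
    x = a if a > 1 else 1.0  # crude starting guess
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x

def newton_recip(b, iterations=6):
    """Approximate 1/b using only multiplication and subtraction:
    x <- x * (2 - b*x). This is the trick behind performing division
    with Newton's method; the seed must satisfy 0 < x < 2/b."""
    x = 0.1  # works as a seed for b roughly in (0.5, 10)
    for _ in range(iterations):
        x = x * (2.0 - b * x)
    return x
```

Both iterations converge quadratically, roughly doubling the number of correct digits per step, which is why so few iterations suffice.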

There are a few classic books on algorithms for computing elementary functions: Hart et al.'s Computer Approximations [1] and Cody and Waite's Software Manual for the Elementary Functions [2]. These books focused on software implementation using polynomial approximations. Perhaps Muller's book is destined to become a new classic in this subject, but only time will tell.

Muller's book focuses on the elementary functions. For learning about computer arithmetic in this era of the IEEE standard for floating point arithmetic, read Goldberg [3], Overton [4], and of course Knuth [5]. Higham's treatise [6] contains a wealth of information and is destined to become a classic along with Wilkinson's books [7,8]. However, Higham concentrates on numerical linear algebra (not even eigenproblems) and says nothing about elementary functions.

Muller's book contains few theorems and even fewer proofs. It does contain many numerical examples, complete with Maple code. The book is not suited to the usual classroom course style. Nevertheless, it may be used for an enjoyable reading course, if the instructor and the students want to do some work.

The book contains over 300 references. I was surprised to learn that several books on computer arithmetic have been published in the last 10 years. You'd think that computer arithmetic was a topic only for the early computer builders, way back in the 1950s!

The book has three parts. The first part deals with polynomial and rational approximations, table-based methods, and multiple precision.

You'll remember from calculus that well-behaved functions can be approximated by Taylor polynomials. However, these approximations give good results only around a point, not over a wide interval. Polynomial approximations suitable for evaluating elementary functions must be good over a wide interval. If the interval is too wide, the degree of the approximating polynomial may be high. Sometimes you get better and simpler approximations by using rational approximations. Finding good polynomial or rational approximations is the realm of classical approximation theory, and many nice results exist.
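The limitation of Taylor polynomials is easy to check directly. The sketch below (my own Python, not from the book) measures the worst-case error of the degree-7 Taylor polynomial for sin over a narrow and a wide interval:

```python
import math

def taylor_sin(x, terms=4):
    # degree-7 Taylor polynomial of sin about 0: x - x^3/3! + x^5/5! - x^7/7!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def max_error(hi, n=1000):
    # worst-case error of taylor_sin on a grid over [0, hi]
    return max(abs(taylor_sin(i * hi / n) - math.sin(i * hi / n))
               for i in range(n + 1))
```

On [0, π/4] the worst error is below 10⁻⁶, while on [0, π] it exceeds 0.07: excellent near the expansion point, useless at the edge of a wide interval. This is exactly why minimax (rather than Taylor) approximations are used in practice.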

The first part also discusses table-based methods, which are the modern version of logarithm tables, if you're old enough to have ever seen or even used one. The idea is to store the values of the desired function taken on a sample of its domain and then to use interpolation for all other arguments. These days, memory is cheap and table-based methods seem attractive. However, the efficient use of caches is now an issue, and interpolation algorithms must take cache behavior into consideration.
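A minimal table-based sine can be sketched in Python (my own illustration; the table size and the choice of linear interpolation are arbitrary):

```python
import math

N = 256                        # number of table intervals (arbitrary choice)
STEP = (math.pi / 2) / N
TABLE = [math.sin(i * STEP) for i in range(N + 1)]  # precomputed samples

def table_sin(x):
    """sin(x) for x in [0, pi/2] via table lookup + linear interpolation."""
    i = int(x / STEP)
    if i >= N:
        return TABLE[N]
    t = x / STEP - i           # fractional position between two samples
    return TABLE[i] * (1.0 - t) + TABLE[i + 1] * t
```

With 256 entries and linear interpolation the error is on the order of 10⁻⁵; real implementations use larger tables, higher-order interpolation, or a short polynomial correction, and must lay the table out with cache behavior in mind.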

An unexpected need for very high-precision computation arises in experimental mathematics, a line of research that has been growing lately. Experimental mathematics uses computer-aided exploration to find, and sometimes prove, conjectures. Although this frequently involves computing results to a large number of digits, it is not merely that. For instance, numerical coincidence does happen and can be misleading — that's why we need very high precision. Try evaluating exp(π √163). The value agrees with an integer to about 30 significant digits, but it is not an integer! (The explanation for this phenomenon needs some deep algebraic number theory.)
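Plain double precision is powerless here, as a quick check with Python's standard math module shows:

```python
import math

x = math.exp(math.pi * math.sqrt(163))  # about 2.6e17

# Every double this large is exactly an integer: the spacing between
# adjacent representable doubles already exceeds 1, so double precision
# cannot even ask whether the true value is an integer. Settling the
# question requires high-precision arithmetic.
print(x % 1.0)      # 0.0 -- the fractional part is invisible at this scale
print(math.ulp(x))  # 32.0 -- gap between neighboring doubles near x
```

(`math.ulp` requires Python 3.9 or later.)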

The second part discusses shift-and-add methods, which are very suitable for hardware implementation but can also be useful in software. Among these methods is the famous CORDIC method, which has been used in many pocket calculators since the HP 35 and also in the Intel 8087 arithmetic coprocessor. CORDIC is a clever algorithm for computing trigonometric functions using just additions and multiplications by powers of two (that is, binary shifts); these are the easiest operations for binary computers. This makes CORDIC attractive for hardware implementation, even though it is not particularly fast (it produces roughly one digit per iteration). CORDIC can be adapted to use base 10 instead of base 2, which is good for pocket calculators because they usually work directly in base 10 to avoid conversion during input and output. For an introduction to CORDIC, see Parris [9].
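Here is a floating-point sketch of rotation-mode CORDIC in Python (my own illustration; real implementations use fixed-point integers and a precomputed angle table, and the multiplications by 2⁻ⁱ below stand in for right shifts):

```python
import math

def cordic_sincos(theta, n=32):
    """Rotation-mode CORDIC for theta in [-pi/2, pi/2].

    Each step rotates (x, y) by +/- atan(2^-i), steering the residual
    angle z toward 0. Every step stretches the vector by a fixed factor,
    so the accumulated gain is divided out at the end."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    gain = 1.0
    for i in range(n):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y / gain, x / gain                  # (sin(theta), cos(theta))
```

With n = 32 iterations the result is accurate to roughly one part in 2³², consistent with "one digit per step."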

The third part deals with range reduction, final rounding, etc. Range reduction is a crucial step in evaluating elementary functions. For instance, sin is defined over the whole real line, so it makes sense to ask for the value of sin(10^{20}). To compute it, you first reduce 10^{20} to the interval [-π,π], after which evaluating the sine is easy. However, reducing 10^{20} to that interval is far from trivial: Muller reports that you'll probably get different results on different machines.
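A quick Python check shows why naive reduction is doomed near 10^{20}:

```python
import math

x = 1e20
# Near 1e20, adjacent doubles are 16384 apart -- thousands of full
# periods of sin. Worse, a naive "x mod 2*pi" reduces modulo the stored
# double 2*pi, whose error (~1e-16) is amplified by the ~1.6e19 periods
# removed, scrambling the result completely. Careful libraries reduce
# against pi stored to hundreds of bits (Payne-Hanek reduction).
print(math.ulp(x))                 # 16384.0
print(math.ulp(x) > 2 * math.pi)   # True: neighbors differ by many periods
```

(The name Payne-Hanek refers to the standard high-precision reduction technique; the book treats range reduction in detail.)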

Final rounding means deciding which of the two neighboring floating-point numbers should represent a given value. Although this sounds like nitpicking, it is actually very important: an incorrect rounding may destroy monotonicity and symmetry properties of the function you're evaluating, and this can lead to disaster further down the line.
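What "correctly rounded" means can be illustrated in Python when the exact value happens to be rational (a toy sketch of my own; the hard problem Muller discusses is deciding the rounding from a finite-precision approximation of an irrational value):

```python
import math
from fractions import Fraction

def correctly_rounded(exact):
    """Given the exact value as a Fraction, return the nearer of the
    two doubles that bracket it."""
    f = float(exact)  # CPython rounds rational-to-float correctly
    lo = math.nextafter(f, -math.inf)
    hi = math.nextafter(f, math.inf)
    # sanity check: f is at least as close to the exact value as
    # either of its floating-point neighbors
    assert abs(Fraction(f) - exact) <= abs(Fraction(lo) - exact)
    assert abs(Fraction(f) - exact) <= abs(Fraction(hi) - exact)
    return f
```

For irrational values like sin(x) there is no exact Fraction to consult, and one must prove that a computed approximation is accurate enough to determine which neighbor is nearer — the so-called Table Maker's Dilemma.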

In summary, this book seems like an essential reference for the experts (which I'm not). More importantly, it is an interesting book for the curious (which I am): you'll probably learn many interesting things from it. If you teach numerical analysis or approximation theory, this book will give you some good examples to discuss in class.

**References**

[1] J. F. Hart et al., Computer Approximations, Wiley, 1968.

[2] W. J. Cody, Jr. and W. Waite, Software Manual for the Elementary Functions, Prentice Hall, 1980.

[3] D. Goldberg, What every computer scientist should know about floating-point arithmetic, ACM Computing Surveys, 23(1), March 1991.

[4] M. L. Overton, Numerical Computing with IEEE Floating Point Arithmetic, SIAM, 2001. http://www.cs.nyu.edu/overton/book/

[5] D. E. Knuth, The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Addison-Wesley, 1997.

[6] N. J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, 2002. http://www.ma.man.ac.uk/~higham/asna/

[7] J. H. Wilkinson, Rounding Errors in Algebraic Processes, Prentice-Hall, 1963.

[8] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University Press, 1965.

[9] R. Parris, Calculator Algorithms, 2001. http://math.exeter.edu/rparris/peanut/cordic.pdf

Luiz Henrique de Figueiredo is a researcher at IMPA in Rio de Janeiro, Brazil. His main interests are numerical methods in computer graphics, but he remains an algebraist at heart. He is also one of the designers of the Lua language.
