*Validated Numerics* by Warwick Tucker is narrower than its title may imply. The subtitle is a bit more specific: *A Short Introduction to Rigorous Computations*. Even more specifically, this is a book about interval analysis.

As one might expect, interval analysis works with intervals rather than simply with numbers. For example, if you know the side of a square is somewhere between 2 and 3, then you know that the area of the square is somewhere between 4 and 9. Real applications are much more complicated but follow the same basic principle.
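The square example can be sketched in a few lines of code. (The book's examples are in Matlab; this is a hypothetical Python sketch, and a truly rigorous implementation would also need directed rounding at each step.)

```python
# Minimal sketch of interval arithmetic: each operation returns an
# interval guaranteed to contain every possible result of applying
# the operation to points drawn from the input intervals.

class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The extremes of the product occur at endpoint pairs,
        # so check all four combinations.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

side = Interval(2, 3)
area = side * side
print(area)  # [4, 9]
```

Note that real interval libraries round the lower endpoint down and the upper endpoint up at every operation, so that floating point rounding can never shrink the enclosure.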

One justification for interval analysis is the ability to represent uncertain quantities without a loss of information. For example, if a measured quantity is believed to be between 3.5 and 4.5 centimeters, one could carry out calculations with the interval [3.5, 4.5] rather than the single point 4 in the middle. Another justification, one emphasized in *Validated Numerics*, is to work around the limitations of computer arithmetic. For example, π is an irrational number and so cannot be represented exactly in computer arithmetic. (Neither can most rational numbers be represented exactly. See Anatomy of a floating point number for an explanation of the limits of computer arithmetic.) To avoid using an approximation for π, one could work with an interval [*a*, *b*] containing π, where *a* is the largest number less than π that can be exactly represented in a computer. Likewise *b* would be the smallest machine-representable number greater than π.
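A conservative version of this enclosure of π is easy to construct in Python (3.9 or later). Since `math.pi` is the representable double closest to π, the true value is guaranteed to lie strictly between the two neighboring representable doubles, giving a two-ulp interval that provably contains π.

```python
import math

# math.pi is the double nearest to the true value of pi, so pi lies
# strictly between the adjacent representable doubles on either side.
# (math.nextafter requires Python 3.9+.)
a = math.nextafter(math.pi, 0.0)        # largest double below math.pi
b = math.nextafter(math.pi, math.inf)   # smallest double above math.pi

assert a < math.pi < b   # [a, b] is a machine-representable enclosure of pi
```

Any computation carried out with the interval [*a*, *b*] then accounts for the fact that π itself was never representable.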

Given the promise of interval analysis, why is it not more widely used? Here are three reasons.

- Interval bounds on computed quantities can be so large as to be impractical. Because interval estimates are based on what is *possible* rather than what is *probable*, the bounds can become extremely conservative after a long sequence of calculations.
- Floating point error is often orders of magnitude smaller than other sources of error in application. If you are estimating the weight of a cow by modeling her as a sphere filled with water, it hardly matters that your value of π is only good to 15 decimal places.
- Interval arithmetic is tedious, as *Validated Numerics* illustrates.
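The first point is worth illustrating. A classic source of over-conservative bounds is the *dependency problem*: interval arithmetic treats each occurrence of a variable as independent, so even an expression like *x* − *x* fails to collapse to zero. A small sketch (hypothetical helper, not from the book), with intervals as `(lo, hi)` pairs:

```python
def sub(x, y):
    # Interval subtraction: the worst case pairs x's low endpoint
    # with y's high endpoint, and vice versa.
    return (x[0] - y[1], x[1] - y[0])

x = (2.0, 3.0)
print(sub(x, x))  # (-1.0, 1.0)
```

The true value of *x* − *x* is exactly 0 for every *x* in [2, 3], but the interval result has width 2 because the arithmetic cannot see that the two operands are the same quantity. Over a long chain of correlated operations, this effect compounds, which is why naive interval bounds can blow up.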

Regarding the first reason, imagine estimating some physical quantity by averaging a large number of measurements. The *possible* error of the accumulated sum grows with each measurement, while the *expected* error of the average decreases. This observation may not seem profound, but brilliant mathematicians saw things differently in the 18th century. Because they thought in terms of logic rather than probability, they believed that using more data would decrease the accuracy of predictions. The statistical mindset of modeling errors as random variables did not develop until the 19th century.
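A toy simulation makes the contrast concrete. (This is an illustrative sketch, not from the book: the error model and numbers are assumptions.) If each measurement carries an error uniform in [−0.1, 0.1], the interval bound on the average stays at ±0.1 no matter how many measurements are taken, while the actual error of the average shrinks roughly like 1/√*n*.

```python
import random

# Toy model: each of n measurements is the true value 10.0 plus an
# error uniform in [-0.1, 0.1].  The worst-case (interval) error of
# the average is always 0.1, but the realized error keeps shrinking.
random.seed(0)
TRUE_VALUE = 10.0

def avg_error(n):
    xs = [TRUE_VALUE + random.uniform(-0.1, 0.1) for _ in range(n)]
    return abs(sum(xs) / n - TRUE_VALUE)

for n in (10, 1000, 100000):
    print(n, avg_error(n))  # realized error shrinks with n
```

Interval analysis, reasoning only about what is possible, can never exhibit this improvement; probability theory can.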

Despite the objections to interval analysis listed above, there are instances where it is quite valuable. For example, suppose you believe you have found a nontrivial zero *z* of the Riemann zeta function where Re(*z*) is slightly more than 0.5. It’s no good to say that the real part is 0.50000000001 as far as your program computes it; rounding error could account for the difference. But if you could demonstrate that the real part lies in an interval with lower bound 0.500000000005, you would be famous, because you would have disproved the Riemann hypothesis.

*Validated Numerics* gives a thorough introduction to interval analysis and its applications in a small space: about 90 pages devoted to interval analysis, plus around 50 pages of introduction and appendices. The book covers the basic theory of interval analysis and its application to automatic differentiation, root-finding, optimization, quadrature, and ordinary differential equations.

While *Validated Numerics* goes into theoretical detail, it also includes a computational perspective. The book sprinkles in examples of Matlab code and includes four computer labs. Although interval analysis can be complicated, the book discusses software libraries that hide much of the complication.

John D. Cook is a research statistician at M. D. Anderson Cancer Center and blogs daily at The Endeavour.