The same question haunts the use of published computer code today as haunted the use of printed mathematical tables some 300 years ago: How do I know the text is correct?

Authors of mathematical tables read their tables against the best of the existing tables to vet their own. In the process, they were obliged to publish any errors they found in the tables they read against, so a self-correcting process was in place to drive the error rate in mathematical tables toward zero. You will find hundreds of corrections to tables in Raymond Archibald’s journal, *Mathematical Tables and Other Aids to Computation*.

Yet the primary source of errors in mathematical tables remained the printer, so any last-minute kerfuffle in the print shop worked against the convergence of this process. It could be argued that the primary practical contribution of Babbage’s difference engine wasn’t the polynomial gear train but its largely unheralded output device, the plate printer.

Reading down a logarithm table for errors is a far easier (if far more tedious) task than finding errors in computer code. The first organized effort to publish code, *The Collected Algorithms of the ACM*, included two self-correcting mechanisms analogous to the lists of errata in mathematical tables: Certifications and Remarks. The third algorithm in the series, “Solution of Polynomial Equations by Bairstow-Hitchcock Method” by A. A. Grau of the Oak Ridge National Laboratory, for example, received two Certifications and two Remarks. Thirty-two years after the launch of *Collected Algorithms*, Don Knuth took a stab at the problem of building trust in published code in his book *Literate Programming* and the associated software tools, Tangle and Weave.

Explicit efforts to build trust in computer codes—published or not—have by and large been abandoned today. This is primarily due, I suspect, to the quantity, complexity, and layering of the code we rely on. “The Annotated Transformer” by Alexander Rush, published in the *Proceedings of the Workshop for NLP Open Source Software* in 2018, is a recent exception. Rush’s article, which harks back to Knuth’s *Literate Programming*, has received very favorable reviews, but its intent is more to understand an algorithm than to build trust in a particular implementation of it.

All of which brings us, perhaps to the reader’s surprise, to *Numerical Methods of Mathematics Implemented in Fortran* by Sujit Kumar Bose. Chapters 2 through 10, the last, are workmanlike recitations of the expected classes of numerical algorithms: roots, matrices, interpolation and approximation, ODEs, PDEs, FFTs, etc. The bibliography is spare, consisting of forty-six books, primarily from the 1960s, ’70s, and ’80s, divided roughly half-and-half between theory and application. There is no index to the FORTRAN routines. Beyond stubs that just call working code, I counted thirty-two FORTRAN implementations, over half of which were simply matrix manipulations. There is no code in the short, 26-page chapter on PDEs. Some chapters have exercises, others don’t. Some theorems have proofs, others don’t.

On purely technical grounds the book is in the middle of the pack. It covers the territory, strikes a reasonable balance between theory and application, and includes a large number of worked-out examples. I would still reach for Press et al.’s *Numerical Recipes in FORTRAN* and consult calgo.acm.org and netlib.org, but that may simply be habit.

There is one serious shortcoming of the book, and it cannot be attributed to the author. The typesetting and layout are atrocious, and there is no evidence of any proofreading. Computer code appears in a number of different fonts, styles, and formats, none of them satisfactory. Code should be set in a monospace font and adhere to one common style throughout.

Which brings us to Chapter 1 and the matter of trust in published code. The introductory chapter of the book is chock-a-block with typographical errors. We are told, for example, that

\( 13 = 1 \times 10^{!} + 3 \)

and

\( 13 = 13.0 = 0.14 \times 10^{2} \)

Knowing that typographical errors in code are both harder to detect and, should the code be used, more dire in their consequences than typographical errors in running text, the reader might be understandably leery of using the FORTRAN in the book. And if the code can’t be used, why is it in the book and in its title?
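For the record, the identities the book presumably intended are \( 13 = 1 \times 10^{1} + 3 \) and \( 13.0 = 0.13 \times 10^{2} \), the latter being the normalized decimal floating-point form with the mantissa in \([0.1, 1)\). A minimal sketch of that normalization (the function name and its details are mine, not the book’s):

```python
def normalized_mantissa(x, base=10):
    """Return (m, e) with 1/base <= |m| < 1 and x == m * base**e,
    the normalized floating-point form used in the book's Chapter 1."""
    if x == 0:
        return 0.0, 0
    m, e = abs(x), 0
    while m >= 1:          # shift the point left until m < 1
        m /= base
        e += 1
    while m < 1 / base:    # shift the point right until m >= 1/base
        m *= base
        e -= 1
    return (m if x > 0 else -m), e

# The positional expansion the book garbles as 10^{!}:
assert 13 == 1 * 10**1 + 3

m, e = normalized_mantissa(13.0)
# m is approximately 0.13 and e is 2, i.e. 13.0 = 0.13 * 10**2,
# not 0.14 * 10**2 as printed.
```

Repeated division by the base accumulates a little rounding error in binary arithmetic, so `m` comes back as approximately rather than exactly 0.13; the exponent, being an integer, is exact.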

The overlap between mathematics and computer science is growing rapidly and, I might note in passing, it is about time and just in time. We desperately need ideas about a calculus of algorithms from the mathematics department, and we desperately need more attention to formal proofs of code from the computer science department. There is enormous theoretical and practical synergy waiting to be tapped here. And while it may seem petty to fret about things like layout and fonts, mathematicians know, perhaps more than practitioners in any other field, that notation is part and parcel of understanding.

Scott Guthery is the founder and lead TeX-setter at Docent Press.