Among the fun math books I have on my overburdened bookshelves is John Conway and Richard Guy’s fascinating volume, *The Book of Numbers* (1996). In following up one of the topics discussed in its very last chapter, I discovered that Conway and Guy had made a bibliographic error, which in the interests of scholarship should be publicly noted. While I could give the correction in a line and be done with it, the topic and its background are curious enough to merit a few paragraphs. To wit:

Anybody who has had a brush with calculus is familiar with taking *derivatives* of a function. The derivative of a function is a whole new function giving the *rate of change* of the original; plug a function into the machine, and out comes a new one, just as “complicated” as the one you started with. If your initial function was something like

[tex]f(x) = x^2,[/tex]

which maps each real number to a real number, then the derivative will be something like

[tex]f^\prime(x) = 2x,[/tex]

which *also* maps elements of [tex]\mathbb{R}[/tex] to elements of [tex]\mathbb{R}[/tex]. That’s a whole lot of mappings! If we were so inclined, we could also represent the “growth rate” of functions by *numbers,* instead of by *functions.* The operation of “finding the growth rate” would then be a *functional,* mapping functions to numbers — though the sort of numbers we find ourselves using are a little out of the ordinary.

Conway and Guy tell the story at the conclusion of their *Book of Numbers.* Let’s consider the sequence of functions

[tex]f(x) = x,\ f(x) = x^2,\ f(x) = x^3,\ \ldots,[/tex]

each of which grows bigger and bigger as [tex]x[/tex] tends to infinity. What’s more, each function in this sequence grows faster than the ones before. Why not, Conway and Guy suggest, call the *growth rate* of the first function 1, the growth rate of the second function 2, and so on?

Having made such a definition, what’s the growth rate of, say, [tex]e^x[/tex]? Well, thinking back to our experience with derivatives, we know that each time we differentiate a power of [tex]x[/tex], we knock the exponent down by 1, which translates to reducing its growth rate by 1. But we can *keep differentiating* the exponential function [tex]e^x[/tex] (there’s even a classic joke about it), so the growth rate of [tex]e^x[/tex] must be *so big* that we could never “knock it down” to 0 with any finite number of subtractions!
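In symbols (my gloss, not Conway and Guy’s notation), the bookkeeping looks like this:

[tex]\frac{d}{dx}\,x^n = n\,x^{n-1}\ \text{(rate } n \text{ drops to } n-1\text{)},\qquad \frac{d}{dx}\,e^x = e^x\ \text{(rate undiminished)},[/tex]

so no finite number of differentiations ever brings the exponential’s growth rate down to 0.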

The reasonable course of action, then, is to break out the surreal numbers and say that [tex]e^x[/tex] has growth rate [tex]\omega[/tex]. Conway and Guy give the numerical growth rates for a whole slew of functions: [tex]\sqrt{x}[/tex] has rate [tex]\frac{1}{2}[/tex], [tex]xe^x[/tex] has rate [tex]\omega + 1[/tex], [tex]e^{2x}[/tex] has rate [tex]2\omega[/tex], and — puzzle on this — [tex]\log x[/tex] has rate [tex]\frac{1}{\omega}[/tex].

(Remember, the natural logarithm is the inverse of the exponential, and it seems reasonable that the composition of functions with growth rates [tex]m[/tex] and [tex]n[/tex] should have growth rate [tex]mn[/tex]. Of course, “reasonable” becomes an interesting concept when dealing with transfinite numbers.)
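To spell out the arithmetic these assignments suggest (my own gloss under the rules just stated, not Conway and Guy’s derivation): if multiplying functions adds their growth rates and composing them multiplies their rates, then

[tex]x e^x:\ 1 + \omega = \omega + 1,\qquad e^{2x} = e^x \cdot e^x:\ \omega + \omega = 2\omega,\qquad e^{\log x} = x:\ \omega \cdot \tfrac{1}{\omega} = 1,[/tex]

which matches every entry in their table, including the puzzling [tex]\frac{1}{\omega}[/tex] for the logarithm.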

Conway and Guy credit a certain “Paul Dubois-Raymond” for this idea of representing growth rates as individual numbers. This should actually be Paul du Bois-Reymond (1831–1889), a German mathematician of the Gauß lineage who discovered, among other things, a continuous function whose Fourier series diverged at a point (in fact, at a dense set of points). In 1875, he gave a description of a function which is everywhere continuous but nowhere differentiable, following up some work Weierstraß had done earlier but not yet published. That same year, he discovered what was essentially Cantor’s “diagonal argument,” proving that there exist more real numbers than rational numbers; Cantor had deduced this fact in 1874, but did not find the diagonalization proof until 1891 (probably without knowing of du Bois-Reymond’s work).

His *Die allgemeine Functionentheorie* (The General Function-Theory, 1882) is notable for claiming that there exist many mathematical results, including important ones, which mathematicians will never be able to prove true or false.

The work which Conway and Guy credit is his “*Ueber die Paradoxen des Infinitär-Calcüls*” [“On the paradoxes of the infinitary calculus”], *Math. Annalen* **11** (1877), 150–167. This paper was apparently reviewed in G. H. Hardy’s *Orders of Infinity* (1910). It took me longer than strictly necessary to find all this, just because Conway and Guy got the fellow’s name wrong!