When calculus books state that 0^{0} is an indeterminate form, they mean that there are functions f(x) and g(x) such that f(x) approaches 0 and g(x) approaches 0 as x approaches 0, and that one must evaluate the limit of [f(x)]^{g(x)} as x approaches 0. But what if 0 is just a number? Then, we argue, the value is perfectly well-defined, contrary to what many texts say. In fact, 0^{0} = 1!
Pick up a high school mathematics textbook today and you will see that 0^{0} is treated as an indeterminate form. For example, the following is taken from a current New York Regents text [6]:
We recall the rule for dividing powers with like bases:
x^{a}/x^{b} = x^{a-b} (x not equal to 0) (1)
If we do not require a > b, then a may be equal to b. When a = b:
x^{a}/x^{b} = x^{a}/x^{a} = x^{a-a} = x^{0} (2)
but
x^{a}/x^{a} = 1 (3)
Therefore, in order for x^{0} to be meaningful, we must make the following definition:
x^{0} = 1 (x not equal to 0) (4)
Since the definition x^{0} = 1 is based upon division, and division by 0 is not possible, we have stated that x is not equal to 0. Actually, the expression 0^{0} (0 to the zero power) is one of several indeterminate expressions in mathematics. It is not possible to assign a value to an indeterminate expression.
Calculus textbooks also discuss the problem, usually in a section dealing with L'Hospital's Rule. Suppose we are given two functions, f(x) and g(x), with the properties that \(\lim_{x\rightarrow a} f(x)=0\) and \(\lim_{x\rightarrow a} g(x)=0.\) When attempting to evaluate [f(x)]^{g(x)} in the limit as x approaches a, we are told rightly that this is an indeterminate form of type 0^{0} and that the limit can take various values, depending on the choice of f and g. This raises the question: are these the same? Can we distinguish 0^{0} as an indeterminate form and 0^{0} as a number?
The treatment of 0^{0} has been discussed for several hundred years. Donald Knuth [7] points out that an Italian count by the name of Guglielmo Libri published several papers in the 1830s on the subject of 0^{0} and its properties. However, in his Elements of Algebra (1770) [4], published decades before Libri's papers, Euler wrote,
As in this series of powers each term is found by multiplying the preceding term by a, which increases the exponent by 1; so when any term is given, we may also find the preceding term, if we divide by a, because this diminishes the exponent by 1. This shews that the term which precedes the first term a^{1} must necessarily be a/a or 1; and, if we proceed according to the exponents, we immediately conclude, that the term which precedes the first must be a^{0}; and hence we deduce this remarkable property, that a^{0} is always equal to 1, however great or small the value of the number a may be, and even when a is nothing; that is to say, a^{0} is equal to 1.
More from Euler: in his Introduction to Analysis of the Infinite (1748) [5], he writes:
Let the exponential to be considered be a^{z} where a is a constant and the exponent z is a variable .... If z = 0, then we have a^{0} = 1. If a = 0, we take a huge jump in the values of a^{z}. As long as the value of z remains positive, or greater than zero, then we always have a^{z} = 0. If z = 0, then a^{0} = 1.
Euler defines the logarithm of y as the value of the function z, such that a^{z} = y. He writes that it is understood that the base a of the logarithm should be a number greater than 1, thus avoiding his earlier reference to a possible problem with 0^{0}.
Defining powers is often carelessly done. Almost thirty years before Libri's first paper, George Baron published "A short Disquisition, concerning the Definition, of the word Power, in Arithmetic and Algebra" in The Mathematical Correspondent (1804). In this paper [1], Baron begins the discussion with the following definition:
The powers of any number, are the successive products, arising from unity, continually multiplied, by that number.
As an example, he writes that 1 × 5 = 5, which is the first power of 5, and 1 × 5 × 5 = 25, which is the second power of 5, etc. The first, second, etc., powers are then conveniently expressed as 5^{1}, 5^{2}, etc. In the same manner, the powers of any number x might be represented as x^{1}, x^{2}, etc., in which x^{1} = 1 × x, x^{2} = x^{1} × x, etc. After stating a few corollaries, Baron writes:
Let us, therefore, next inquire, whether the same definition, will not lead us to a clear and intelligible solution, of the mysterious paradoxes, resulting from the common definition, when applied, to what is denominated, the nothingth power of numbers.
Baron then addresses the rules for dividing powers (look back to the argument from the high school text), but he develops a different conclusion:
If the multiplication by x, be abstracted from the first power of x, by means of division; the power will become nothing but the unit will remain: for \(\frac{x^1}{x} = \frac{1\times x}{x} =1,\) and hence it is plain that x^{0} = 1, when x represents any number whatever. But since the number x, is here unlimited with regard to greatness, it follows, that, the nothingth power of an infinite number is equal to a unit.
Baron gives credit to both William Emerson (1780) [3] and Jared Mansfield (1802) [9] who wrote on the subject of "nothing." Baron takes their arguments one step further and postulates that the number x can be any number, great or small:
To pursue the application of our definition, to quantity in the ultimate extremity of smallness, let us suppose x to represent any fractional quantity; or in other words, let x denote any magnitude, expressed in numbers, by means of some part of its measuring unit: then by the definition x^{1} = 1 × x. Let now this multiplication by x, be abstracted; and for the reasons heretofore advanced, we have x^{0} = 1. Now since x here represents a fractional quantity, independent of any limitation, in respect to smallness; we may therefore suppose x, by means of continual diminution, or decrease, to pass from its present value, through every degree of smallness, until it become nothing; then it will be evident, that, during this diminution or decrease of x, x^{0} will continue equal to an invariable unit; and that precisely at the instant, when x becomes nothing, x^{0}, or 0^{0} = 1.
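Baron's definition, in which the powers of a number are "the successive products, arising from unity, continually multiplied, by that number," translates directly into an iterative computation, and the computation makes his conclusion vivid: when no multiplications are performed, the unit simply remains. A minimal sketch (the function name is ours):

```python
def baron_power(x, n):
    """Baron's definition: the n-th power of x is unity multiplied by x, n times."""
    result = 1  # "arising from unity"
    for _ in range(n):
        result *= x
    return result

# With n = 0, no multiplications occur and the unit remains untouched,
# so the "nothingth power" of any x, including 0, is 1.
```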
Baron never mentions the term indeterminate form, and he in fact ends his treatise with the following:
Also, since x^{0} = 1, whatever be the value of x; of consequence; in every system of logarithms, the logarithm of 1 = 0.
According to Knuth, Libri's 1833 paper [8] "did produce several ripples in mathematical waters when it originally appeared, because it stirred up a controversy about whether 0^{0} is defined." Most mathematicians at the time agreed that 0^{0} = 1, even though Augustin-Louis Cauchy had listed 0^{0} in a table of undefined forms in his book entitled Cours d'Analyse (1821) [2]. Evidently, Libri's argument was not convincing, so August Möbius came to his defense, presenting a supposed proof that 0^{0} = 1 (in essence, a proof that \(\lim_{x\rightarrow {0^+}} x^x=1\)). After another mathematician challenged this proof, the paper "was quietly omitted from the historical record when the collected works of Möbius were ultimately published." Knuth goes on to write that the debate ended with the result that 0^{0} should be undefined, and then he states,
"No, no, ten thousand times no!"
Perhaps Cauchy was developing the notion of 0^{0} as an undefined limiting form: the limiting value of [f(x)]^{g(x)} is not known a priori when f(x) and g(x) each approach 0 independently. According to Knuth, "the value of 0^{0} is less defined than, say, the value of 0 + 0." He reminds us to recall the binomial theorem:
\[(x + y)^n = \sum_{k=0}^n {n \choose k} x^k y^{n-k}.\]
If this theorem is to hold for at least one nonnegative integer n, then mathematicians "must believe that 0^{0} = 1," for we can plug in x = 0 and y = 1 to get 1 on the left, while on the right every term with k ≥ 1 vanishes, leaving only the k = 0 term, 0^{0}.
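Knuth's point can be checked mechanically. Python's ** operator evaluates 0**0 as 1, and with that convention the right-hand side of the binomial theorem at x = 0, y = 1 collapses to its k = 0 term and agrees with the left side for every n. A small sketch (the function name is ours):

```python
from math import comb

def binomial_rhs(x, y, n):
    """Right-hand side of the binomial theorem."""
    return sum(comb(n, k) * x**k * y**(n - k) for k in range(n + 1))

# (0 + 1)^n = 1 on the left; on the right every term with k >= 1 vanishes,
# leaving comb(n, 0) * 0**0 * 1**n, which equals 1 only because 0**0 == 1.
checks = [binomial_rhs(0, 1, n) == (0 + 1)**n for n in range(6)]
```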
In 1970, Herbert Vaughan [10] argued for the explicit recognition of evaluating 0^{0} = 1. He aimed to show "that there is a good deal of motivation for defining '0^{0}' to be a numeral for 1." He provided three examples.
Example 1. Vaughan gave the infinite geometric progression
\[\sum_{n=1}^{\infty} x^{n-1} = \frac{1}{1-x} \mbox{ for } \vert x \vert < 1. \tag{6}\]
If \(x = 0,\) then \(\vert x\vert = \vert 0\vert < 1,\) which leads to
\[\sum_{n=1}^{\infty} 0^{n-1} = \frac{1}{1-0} = 1. \tag{7}\]
The infinite sum can be expanded as 0^{0} + 0^{1} + 0^{2} + … = 1. As stated by Vaughan, if 0^{0} is not defined, this summation is senseless. Further, if 0^{0} ≠ 1, then the summation is false.
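A finite truncation of the series makes Vaughan's point concrete. Python's ** operator takes 0**0 to be 1, and with that convention the partial sum at x = 0 equals 1/(1 - 0) exactly. A sketch (names ours):

```python
def geometric_partial_sum(x, terms=10):
    """Partial sum of the geometric series: sum of x**(n-1) for n = 1..terms."""
    return sum(x**(n - 1) for n in range(1, terms + 1))

# At x = 0 only the n = 1 term, 0**0, is nonzero; the sum is 1,
# exactly as 1/(1 - 0) predicts, but only because 0**0 == 1.
total = geometric_partial_sum(0)
```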
Example 2. This example arises from the infinite summation for e^{x}, which can be written as
\[\sum_{n=1}^{\infty} \frac{x^{n-1}}{(n-1)!} = e^x \mbox{, for all } x. \tag{8}\]
Everyone agrees that 0! = 1, so in the case where x = 0, the sum becomes
\[\sum_{n=1}^{\infty} \frac{0^{n-1}}{(n-1)!} = e^0 = 1. \tag{9}\]
The sum can be expanded as
\[\frac{0^0}{0!} + \frac{0^1}{1!} + \frac{0^2}{2!} + \cdots = \frac{0^0}{1} + 0 + 0 + \cdots = 0^0. \tag{10}\]
The right-hand side of the summation is e^{0} = 1, so 0^{0} = 1.
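The same check works for the exponential series: at x = 0 every term after the first vanishes, and the partial sum equals e^{0} = 1 precisely because Python evaluates 0**0 as 1. A sketch (names ours):

```python
from math import factorial, e

def exp_partial_sum(x, terms=10):
    """Partial sum of the series for e^x: sum of x**(n-1)/(n-1)! for n = 1..terms."""
    return sum(x**(n - 1) / factorial(n - 1) for n in range(1, terms + 1))

# At x = 0 the sum collapses to 0**0/0! = 1, matching e^0.
value = exp_partial_sum(0)
```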
Example 3. A third example given by Vaughan involves the cardinal number of a set of mappings. In set theory, exponentiation of a cardinal number is defined as follows:
a^{b} is the cardinal number of the set of mappings of a set with b members into a set with a members.
For instance, 2^{3} = 8 because there are eight ways to map the set { x, y, z } into the set { a, b }. In order to calculate 0^{0}, determine the number of mappings of the empty set into itself. There is precisely one such mapping: the empty mapping, which, regarded as a set of ordered pairs, is the empty set itself. "So, as far as cardinal numbers are concerned," wrote Vaughan, "0^{0} = 1."
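Vaughan's counting argument can be imitated by brute force: every mapping of a set with b members into a set with a members can be encoded as a tuple of b choices, and itertools.product enumerates these tuples. With a = b = 0 it yields exactly one tuple, the empty one, which encodes the empty mapping. A sketch (the function name is ours):

```python
from itertools import product

def count_mappings(a, b):
    """Number of mappings of a set with b members into a set with a members."""
    return len(list(product(range(a), repeat=b)))

# 2^3 = 8 mappings of {x, y, z} into {a, b}; the empty set maps
# into itself in exactly one way (the empty mapping), so 0^0 = 1.
```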
When might a mathematician want 0^{0} to be something that is not indeterminate? If, for example, we are discussing the function f(x, y) = x^{y}, the origin is a discontinuity of the function. No matter what value may be assigned to 0^{0}, the function x^{y} can never be continuous at x = y = 0. Why not? The limit of x^{y} along the line x = 0 is 0, but the limit along the line y = 0 is 1, not 0. For consistency and usefulness, a "natural" choice would be to define 0^{0} = 1.
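The two approach paths are easy to tabulate, and no assignment to 0^{0} can reconcile them. A sketch:

```python
xs = [0.1, 0.01, 0.001]

# Along the line x = 0 (with y > 0): 0**y is 0 for every positive y.
along_x_equals_0 = [0.0 ** y for y in xs]

# Along the line y = 0 (with x > 0): x**0 is 1 for every positive x.
along_y_equals_0 = [x ** 0.0 for x in xs]
```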
In keeping with the honored pedagogical technique of "First tell 'em what you are going to tell 'em, then tell 'em, then tell 'em what you told 'em," we summarize. If you are dealing with limits, then 0^{0} is an indeterminate form, but if you are dealing with ordinary algebra, then 0^{0} = 1.