The Journal of Online Mathematics and Its Applications, Volume 6 (2006)

The Chebyshev Equioscillation Theorem, Robert Mayans

The proof of the Chebyshev theorem does not produce a construction of the polynomial of best approximation. Indeed, a general algorithm to produce the best approximation for any continuous function is not feasible, because we would require an infinite amount of input data to specify an arbitrary continuous function. Still, there are methods to find the polynomial of best approximation for many cases, and these methods make critical use of the Chebyshev theorem.

For some very simple cases, *ad hoc* methods suffice to find the polynomial of best approximation. For example, we seek the best quadratic approximation to the function
`y` = `f`(`x`) = sin^{2}(`x`)
on the interval
[−`π` / 2, `π` / 2].

By examining the graph of
`y` = sin^{2}(`x`),
it looks plausible that a good quadratic approximation
`p`(`x`)
would be a parabola symmetric about the `y`-axis, which overestimates
sin^{2}(`x`)
at
`x` = 0
and
`x` = `a` = `π` / 2,
and underestimates
sin^{2}(`x`)
most at a point `z` near 1.

The Chebyshev theorem tells us that if we can find a quadratic that makes these three errors equal and maximal, then we've found the best approximation, because we have a five-point alternating set of
(−`π` / 2, −`z`, 0, `z`, `π` / 2).
The solution is not hard: writing
`p`(`x`) = `b` + `c``x`^{2}
and `E` for the maximum error, we must solve:

- sin^{2}(0) − `p`(0) = −`E`
- sin^{2}(`π` / 2) − `p`(`π` / 2) = −`E`
- sin^{2}(`z`) − `p`(`z`) = `E`, where `z` is the root of sin(2`x`) − 2`c``x` = 0 near 1.

So
`b` = `E`,
`c` = (2 / `π`)^{2},
and `z` is the root of
sin(2`x`) − 2`c``x` = 0
near 1. By Newton's iteration, we find that
`z` = 1.056612311,
and basic calculus techniques show that these five points
0, ±`z`, ±`π` / 2
are the only maxima and minima of
sin^{2}(`x`) − `p`(`x`)
on the interval
[−`π` / 2, `π` / 2].
Thus
`p`(`x`) = 0.405284735`x`^{2} + 0.152818379,
and the maximum error is
`E` = 0.152818379.
Note that since the alternating set has length 5, this is also the best polynomial approximation of degree
≤ 3.
Other simple examples may be found in Atkinson (1989).
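These numbers are easy to check numerically. The sketch below (the variable names are ours, not from the article) runs Newton's iteration for `z` and then recovers `E`:

```python
import math

# c = (2/pi)^2 is forced by requiring equal error at x = 0 and x = pi/2
c = (2 / math.pi) ** 2

# Newton's iteration for the root of sin(2x) - 2cx = 0 near x = 1,
# i.e. the interior critical point z of sin^2(x) - p(x)
x = 1.0
for _ in range(20):
    g = math.sin(2 * x) - 2 * c * x
    dg = 2 * math.cos(2 * x) - 2 * c
    x -= g / dg
z = x

# Since b = E, the condition sin^2(z) - p(z) = E gives E = (sin^2(z) - c z^2) / 2
E = (math.sin(z) ** 2 - c * z ** 2) / 2

print(c, z, E)
```

This reproduces the values `z` = 1.056612311 and `E` = 0.152818379 quoted above.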

A more general method for finding the polynomial of best approximation is the Remes algorithm. This is a family of algorithms that finds the best polynomial by iteratively constructing alternating sets. Here is an outline, as defined in Atkinson (1978); see also the article on the Remes Algorithm at MathWorld:

Step 1. Given a sequence of
`n` + 2
nodes
`x`_{0}, ..., `x`_{n + 1},
solve the linear system:

`f`(`x`_{i}) = `a`_{0} + `a`_{1}`x`_{i} + ··· + `a`_{n}`x`_{i}^{n} + (−1)^{i}`E`, for `i` = 0, 1, ..., `n` + 1.

This is a system of
`n` + 2
equations, linear in
`a`_{0}, `a`_{1}, ..., `a`_{n},
and `E`. The solution defines a new polynomial
`p`(`x`) = `a`_{0} + `a`_{1} `x`^{1} + ··· + `a`_{n} `x`^{n}
that approximates `f`.
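As a sketch of this step (the function name and the use of NumPy are ours, not part of the article), the system can be assembled and solved directly:

```python
import numpy as np

def remes_solve(f, nodes):
    """Solve the Remes linear system: find a_0, ..., a_n and E such that
    f(x_i) - p(x_i) = (-1)^i * E at each of the n + 2 given nodes."""
    m = len(nodes)                    # m = n + 2
    n = m - 2
    A = np.vander(np.asarray(nodes, dtype=float), n + 1, increasing=True)
    signs = ((-1.0) ** np.arange(m)).reshape(-1, 1)   # alternating sign on E
    A = np.hstack([A, signs])
    sol = np.linalg.solve(A, np.array([f(x) for x in nodes]))
    return sol[:n + 1], sol[n + 1]    # coefficients a_0..a_n, levelled error E
```

If the nodes already form the optimal alternating set, the solution is the polynomial of best approximation; otherwise the solved `E` only levels the error at the nodes, and its magnitude grows toward the true minimax error as the nodes improve.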

Step 2. Find a new non-uniform alternating set
`z`_{0}, `z`_{1}, ..., `z`_{n + 1}
of length
`n` + 2,
where each
`z`_{i}
is a local maximum or minimum of the error function, and where
`f`(`x`_{i}) − `p`(`x`_{i})
and
`f`(`z`_{i}) − `p`(`z`_{i})
have the same sign. Also, exchange nodes if needed so that the point of maximum error is included in the
`z`_{i}'s.
Finally, we rename the `z`'s to `x`'s, and return to Step 1.

In practice, the Remes algorithm converges quickly if the initial approximation is sufficiently close. For a full discussion, see Meinardus (1967).
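Since the five-point alternating set means the quadratic above is also the best cubic, the whole iteration can be sketched for sin^{2}(`x`) with degree 3 and `n` + 2 = 5 nodes. The sketch below (all names are ours) replaces a careful one-dimensional search for the extrema of the error with a dense grid:

```python
import math
import numpy as np

def remes_sin2(degree=3, iters=10, gridsize=4001):
    """A bare-bones Remes iteration for f(x) = sin^2(x) on [-pi/2, pi/2]."""
    f = lambda x: np.sin(x) ** 2
    a, b = -math.pi / 2, math.pi / 2
    m = degree + 2                        # number of nodes and of equations
    nodes = np.linspace(a, b, m)          # crude initial guess: uniform nodes
    xs = np.linspace(a, b, gridsize)
    for _ in range(iters):
        # Step 1: solve for a_0..a_n and E, with error (-1)^i * E at node i
        A = np.vander(nodes, degree + 1, increasing=True)
        A = np.hstack([A, ((-1.0) ** np.arange(m)).reshape(-1, 1)])
        sol = np.linalg.solve(A, f(nodes))
        coeffs, E = sol[:-1], sol[-1]
        # Step 2: the error alternates in sign between the nodes; in each run
        # of constant sign, take the grid point of largest |error| as a node
        e = f(xs) - np.polyval(coeffs[::-1], xs)
        runs = np.split(np.arange(gridsize),
                        np.where(np.diff(np.sign(e)) != 0)[0] + 1)
        if len(runs) == m:                # found a full alternating set
            nodes = xs[[r[np.argmax(np.abs(e[r]))] for r in runs]]
    return coeffs, abs(E)
```

Starting from uniform nodes, a few iterations settle on `E` ≈ 0.152818 and the quadratic found by hand above; the odd-degree coefficients come out (numerically) zero by symmetry.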

The following applet implements a version of the Remes algorithm to find the polynomial of best approximation. As in a previous applet, you may either select a predefined function or sketch one by dragging the mouse from left to right over the plotting area. Select the degree of the polynomial, then step through each iteration by pressing the **Next** button twice. The first time, the applet shows the points selected for the next non-uniform alternating set. The second time, the new polynomial and error function are calculated and displayed. The iteration halts if the approximation is exact, if the program cannot find an alternating set, if the linear system is singular or nearly singular, or if the norm of the error grows to more than five times the initial error.