Volume 6. August 2006. Article ID 1297

- Introduction
- Chebyshev Polynomials
- Continuous Chebyshev Expansion
- Discrete Chebyshev Expansion
- Rates of Convergence
- Filters
- Current Research Areas
- Further Explorations
- Summary
- References

Most areas of numerical analysis, as well as many other areas of mathematics as a whole, make use of the Chebyshev polynomials. In several areas, e.g. polynomial approximation, numerical integration, and pseudospectral methods for partial differential equations, the Chebyshev polynomials play a significant role. In fact, the following quote has been attributed to a number of distinguished mathematicians:

"The Chebyshev polynomials are everywhere dense in numerical analysis."

In this article we use Java applets to interactively explore some of the classical results on approximation using Chebyshev polynomials. We also discuss an active research area that uses the Chebyshev polynomials. Mason and Handscomb (2003) and Rivlin (1974) are devoted to the Chebyshev polynomials and may be consulted for more detailed information than we provide in this brief presentation. The Chebyshev polynomials are named for *Pafnuty Chebyshev*. You can read a brief biography of Chebyshev at Wikipedia.

The article uses four applets:

- Chebyshev Polynomial (CP) applet
- Chebyshev Approximation (CA) applet
- Runge Phenomenon (RP) applet
- Exponential Filter (EF) applet

The CP applet and the CA applet are used frequently and thus open in separate windows that you can keep open as you read the text. The CA applet window also gives instructions for using the applet and definitions of the functions used in the applet.

The *Chebyshev Polynomials* (of the first kind) are defined as

(1) | $${T}_{n}(x)=\mathrm{cos}\left[n\mathrm{arccos}(x)\right]$$ |

They are orthogonal with respect to the weight $w(x)={\left(1-{x}^{2}\right)}^{-1/2}$ on the interval $\left[-1,1\right]$. Intervals $\left[a,b\right]$ other than $\left[-1,1\right]$ are easily handled by the change of variables $x\to \frac{1}{2}\left[(b-a)x+a+b\right]$.

Although not immediately evident from definition (1), `T`_{n} is a polynomial of degree `n`. From definition (1) we have that ${T}_{0}(x)=\mathrm{cos}(0)=1$ and ${T}_{1}(x)=\mathrm{cos}\left(\mathrm{arccos}(x)\right)=x$.
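The change of variables above can be checked at the endpoints and midpoint; a minimal Python sketch (the helper name `to_interval` is ours):

```python
def to_interval(x, a, b):
    """Map x in [-1, 1] to [a, b] via x -> (1/2)[(b - a)x + a + b]."""
    return 0.5 * ((b - a) * x + a + b)

# Endpoints of [-1, 1] map to the endpoints of [a, b]:
assert to_interval(-1.0, 2.0, 5.0) == 2.0
assert to_interval(1.0, 2.0, 5.0) == 5.0
assert to_interval(0.0, 2.0, 5.0) == 3.5   # midpoint maps to midpoint
```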

**Exercise.** Use basic trig identities to establish the triple recursion relation

(2) | $${T}_{n+1}(x)=2x{T}_{n}(x)-{T}_{n-1}(x),\text{\hspace{1em}}n=1,2,\dots $$ |

Using equation (2) we see that

$$\begin{array}{c}{T}_{2}(x)=2x(x)-1=2{x}^{2}-1\\ {T}_{3}(x)=2x\left(2{x}^{2}-1\right)-x=4{x}^{3}-3x\\ {T}_{4}(x)=2x\left(4{x}^{3}-3x\right)-\left(2{x}^{2}-1\right)=8{x}^{4}-8{x}^{2}+1\\ \vdots \end{array}$$

and, by induction, that the Chebyshev polynomial `T`_{n} is indeed a polynomial of degree `n`.
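The recursion (2) also gives a convenient way to evaluate `T`_{n} numerically; a Python sketch (the function name is ours), checked against the trig definition (1):

```python
import math

def cheb_T(n, x):
    """Evaluate T_n(x) by the three-term recursion (2)."""
    if n == 0:
        return 1.0
    t_prev, t = 1.0, x                       # T_0, T_1
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev  # T_{k+1} = 2x T_k - T_{k-1}
    return t

# Agreement with definition (1) on [-1, 1]:
for n in range(8):
    for x in (-0.9, -0.3, 0.2, 0.7):
        assert abs(cheb_T(n, x) - math.cos(n * math.acos(x))) < 1e-12

print(cheb_T(4, 0.5))   # 8(0.5)^4 - 8(0.5)^2 + 1 = -0.5
```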

What do the Chebyshev polynomials look like? The Chebyshev polynomials of degree `n` = 0, 1, ..., 12 can be plotted in the CP applet. Move the slider to change the degree. Notice that $\left|{T}_{n}(x)\right|\le 1$. Since `T`_{n} is a polynomial of degree `n`, it has `n` zeros, which in this case are real, distinct, and located in $\left[-1,1\right]$.

**Exercise.** Show that the zeros of `T`_{n+1} are

(3) | $${x}_{k}=\mathrm{cos}\left(\frac{\pi (2k+1)}{2n+2}\right),\text{\hspace{1em}}k=0,1,\dots ,n$$ |

The zeros are known as the Chebyshev-Gauss (CG) points.
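The formula (3) is easy to verify numerically: substituting one of the points into definition (1) with degree `n` + 1 gives the cosine of an odd multiple of π/2, which is zero. A Python sketch (the name `cg_points` is ours):

```python
import math

def cg_points(n):
    """The n + 1 points of equation (3)."""
    return [math.cos(math.pi * (2 * k + 1) / (2 * n + 2)) for k in range(n + 1)]

# Each point is a zero of the degree n + 1 Chebyshev polynomial:
n = 6
for xk in cg_points(n):
    assert abs(math.cos((n + 1) * math.acos(xk))) < 1e-12
```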

The infinite continuous Chebyshev series expansion is

(4) | $$f(x)\approx \sum _{n=0}^{\infty}\text{'}{\alpha}_{n}{T}_{n}(x)$$ |

where

(5) | $${\alpha}_{n}=\frac{2}{\pi}{\int}_{-1}^{1}{\left(1-{x}^{2}\right)}^{-1/2}f(x){T}_{n}(x)dx$$ |

The single prime notation in the summation indicates that the first term is halved. Truncating the series after `N` + 1 terms, we get the truncated continuous Chebyshev expansion:

(6) | $${S}_{N}(x)=\sum _{n=0}^{N}\text{'}{\alpha}_{n}{T}_{n}(x)$$ |

There are several functions for which the integral defining the coefficients ${\alpha}_{n}$ can be evaluated explicitly, but this is not possible in general. Examples included in the CA applet for which a continuous truncated expansion can be derived are the sign function `f`_{1}, the square root function `f`_{4}, and the absolute value function `f`_{5} (open the applet window to review the definitions of these functions).

The conditions which must be placed on `f` to ensure the convergence of the series (4) depend on the type of convergence to be established: pointwise, uniform, or `L`^{2}. At the lowest level, the series (4) converges pointwise to `f` at points where `f` is continuous in $\left[-1,1\right]$ and converges to the average of the left and right limiting values of `f` at any of a finite number of jump discontinuities in the interior of the interval.

The sign function in the CA applet has a jump discontinuity at `x`_{0} = 0 with the limiting values $f\left({x}_{0}^{+}\right)=1$ and $f\left({x}_{0}^{-}\right)=-1$ on each side of the discontinuity. Thus the series converges to zero at this point, i.e.

$${S}_{N}\left({x}_{0}\right)\approx \frac{1}{2}\left[f\left({x}_{0}^{+}\right)+f\left({x}_{0}^{-}\right)\right]=0$$

for sufficiently large `N`. In the applet select the sign function from the Functions menu and check the blue *continuous, S* option on the Approximation menu. Using the slider at the bottom of the applet, slowly adjust `N` from `N` = 7 to `N` = 128 and observe that the value of `S`_{N}(0) is approximately zero.

**Exercise.** Show that if `f` is an even function then ${\alpha}_{k}=0$ for `k` = 1, 3, 5, .... If `f` is an odd function then ${\alpha}_{k}=0$ for `k` = 0, 2, 4, ....
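The parity result can be checked numerically by substituting x = cos θ in (5) and applying a midpoint rule; a Python sketch (the helper `cheb_coeff` and the panel count `m` are ours):

```python
import math

def cheb_coeff(f, n, m=2000):
    """Approximate alpha_n from (5) via x = cos(theta), midpoint rule on [0, pi]."""
    h = math.pi / m
    return (2.0 / math.pi) * h * sum(
        f(math.cos((j + 0.5) * h)) * math.cos(n * (j + 0.5) * h) for j in range(m))

# Even function |x|: odd-index coefficients vanish.
assert abs(cheb_coeff(abs, 3)) < 1e-8 and abs(cheb_coeff(abs, 5)) < 1e-8
# Odd function sign(x): even-index coefficients vanish.
sign = lambda x: (x > 0) - (x < 0)
assert abs(cheb_coeff(sign, 0)) < 1e-8 and abs(cheb_coeff(sign, 2)) < 1e-8
```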

The result in the last exercise can be observed in the truncated continuous expansions of ${f}_{4}(x)=\sqrt{1-{x}^{2}}$ and ${f}_{5}(x)=\left|x\right|$ (even) and ${f}_{1}(x)=\mathrm{sign}(x)$ (odd) in the CA applet. For example, select the even function `f`_{4}, which is labeled as *sqrt* on the Functions menu, and select the blue *continuous*, `S` option on the Approximation menu. Then on the Options menu check *plot coefficients* and using the slider slowly adjust `N` from `N` = 7 to `N` = 21. In the right window observe that ${\alpha}_{k}=0$ for `k` = 1, 3, 5, .... The magnitude of the coefficients can also be viewed with the `y`-axis scaled logarithmically (*semiLogY* on the Options menu). However, in this case the coefficients which are zero are not plotted, as log(0) is undefined.

When the integral in (5) cannot be evaluated exactly, we can introduce a discrete grid and use a numerical quadrature (integration) formula. Several possible grids and related quadrature formulas exist. The Chebyshev-Gauss-Lobatto (CGL) points

(7) | $${x}_{k}=-\mathrm{cos}\left(\frac{k\pi}{N}\right),\text{\hspace{1em}}k=0,1,\dots ,N$$ |

are a popular choice of quadrature points. The CGL points consist of the $N-1$ interior extrema of ${T}_{N}(x)$ plus the two endpoints of the interval $\left[-1,1\right]$.

Using the CP applet, observe how the extrema of the Chebyshev polynomials are not evenly distributed and how they cluster around the boundary. In the CA applet, the CGL points may be plotted by checking *plot CGL points* on the Options menu. Try this with the sign function starting with `N` = 9 and then with increasing `N`.

**Exercise.** Show that ${T}_{N}(x)=\pm 1$ at each of the `N` + 1 CGL points.
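A numerical check of the exercise, as a Python sketch:

```python
import math

N = 9
cgl = [-math.cos(k * math.pi / N) for k in range(N + 1)]  # CGL points (7)

# T_N attains +1 or -1 at every CGL point, alternating in sign:
for k, xk in enumerate(cgl):
    tN = math.cos(N * math.acos(xk))          # T_N(x_k) via definition (1)
    assert abs(tN - (-1.0) ** (N - k)) < 1e-12
```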

The corresponding CGL quadrature formula is

(8) | $${\int}_{-1}^{1}\frac{f(x)}{\sqrt{1-{x}^{2}}}dx=\frac{\pi}{N}\sum _{k=0}^{N}\text{'}\text{'}f\left({x}_{k}\right)$$ |

The double prime notation in the summation indicates that the first and last terms are halved. If `f` is a polynomial of degree less than or equal to $2N-1$, the CGL quadrature formula is exact. This is remarkable accuracy considering that the values of the integrand are only known at the `N` + 1 CGL points. Using the CGL quadrature formula to evaluate the integral in (5), the discrete Chebyshev coefficients ${a}_{n}$ are defined to be

(9) | $${\alpha}_{n}\approx {a}_{n}=\frac{2}{N}\sum _{k=0}^{N}\text{'}\text{'}f\left({x}_{k}\right){T}_{n}\left({x}_{k}\right)$$ |

and the discrete truncated partial sum is

(10) | $${P}_{N}(x)=\sum _{n=0}^{N}\text{'}{a}_{n}{T}_{n}(x)$$ |
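A direct O(N²) implementation of (9) and (10), as a Python sketch (all names are ours). As a consistency check, `P`_{N} reproduces a low-degree polynomial:

```python
import math

def discrete_coeffs(f, N):
    """Discrete coefficients a_n from (9); the double prime halves the
    first and last terms of the quadrature sum."""
    x = [-math.cos(k * math.pi / N) for k in range(N + 1)]   # CGL points (7)
    a = []
    for n in range(N + 1):
        terms = [f(xk) * math.cos(n * math.acos(xk)) for xk in x]
        terms[0] *= 0.5
        terms[-1] *= 0.5
        a.append(2.0 / N * sum(terms))
    return a

def P(f, N, x):
    """Discrete truncated partial sum (10); the single prime halves a_0."""
    a = discrete_coeffs(f, N)
    return 0.5 * a[0] + sum(a[n] * math.cos(n * math.acos(x))
                            for n in range(1, N + 1))

# x^3 is reproduced (up to rounding) for N = 8:
cube = lambda t: t ** 3
assert abs(P(cube, 8, 0.37) - 0.37 ** 3) < 1e-10
```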

Using definition (9) takes $O\left({N}^{2}\right)$ floating point operations (flops) to evaluate the discrete Chebyshev coefficients. For large `N`, a better choice is the fast cosine transform (FCT) (Briggs and Henson, 1995) which takes $O\left(N{\mathrm{log}}_{2}N\right)$ flops. For example, if `N` = 1000, ${N}^{2}=\mathrm{1,000,000}$ while $N{\mathrm{log}}_{2}N<\mathrm{10,000}$. The extreme efficiency of the FCT is one reason for the popularity of Chebyshev approximations in applications.

Requiring that the approximation be interpolating, i.e., requiring that it satisfy

(11) | ${P}_{N}\left({x}_{i}\right)=f\left({x}_{i}\right)\text{\hspace{1em}}i=0,1,\dots ,N$ |

we get the interpolating partial sum

(12) | $${I}_{N}(x)=\sum _{n=0}^{N}\text{'}\text{'}{a}_{n}{T}_{n}(x)$$ |

The interpolating partial sum equals the truncated series with the coefficients approximated via CGL quadrature, except that the last coefficient is halved. This is due to the choice of quadrature points. If Gaussian quadrature, which uses the Chebyshev-Gauss (CG) points, had been used instead of CGL quadrature, the interpolating and discrete truncated partial sums would be identical. The CG points, the zeros given in (3), do not include $x=\pm 1$. Chebyshev pseudospectral methods for solving PDEs usually incorporate the CGL points and not the CG points. The reason for this is that the discrete grid must include the boundary points so that the boundary conditions of the PDE can be incorporated into the numerical approximation.
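The interpolating sum (12) and condition (11) can be checked directly; a Python sketch (names are ours), using e^x as an arbitrary smooth test function:

```python
import math

def I(f, N, x):
    """Interpolating partial sum (12): the double prime halves the
    first AND last coefficients."""
    xs = [-math.cos(k * math.pi / N) for k in range(N + 1)]  # CGL points (7)
    s = 0.0
    for n in range(N + 1):
        terms = [f(xk) * math.cos(n * math.acos(xk)) for xk in xs]
        terms[0] *= 0.5
        terms[-1] *= 0.5
        a_n = 2.0 / N * sum(terms)                # coefficients (9)
        w = 0.5 if n in (0, N) else 1.0           # the double prime in (12)
        s += w * a_n * math.cos(n * math.acos(x))
    return s

# Interpolation condition (11): I_N passes through the data at the CGL points.
N, f = 9, math.exp
for xk in [-math.cos(k * math.pi / N) for k in range(N + 1)]:
    assert abs(I(f, N, xk) - f(xk)) < 1e-10
```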

Using the CA applet, we can observe the difference between `S`_{N}, `P`_{N}, and `I`_{N}. For example if we use the sign function (select *sign* from the Functions menu) with `N` = 11 (set `N` using the slider at the bottom of the applet) and plot the CGL points (check *plot CGL points* on the Options menu) we see that `I`_{N} goes through the interpolation sites while `S`_{N} and `P`_{N} do not (On the Approximations menu, select the blue *interpolation, I* and then the red *continuous, S*).

Since (12) is a polynomial of at most degree `N` that satisfies the interpolation condition (11) at `N` + 1 distinct points, a standard result from numerical analysis tells us that `I`_{N} is the unique interpolating polynomial (see Burden and Faires (2005), p. 106). The interpolating polynomial may be written in several equivalent forms: Lagrange, Newton, and Barycentric. For information on the merits of each form, see Berrut and Trefethen (2004). The Lagrange form of the interpolating polynomial is

$${I}_{N}(x)=\sum _{k=0}^{N}f\left({x}_{k}\right){L}_{k}(x)$$

where

(13) | $${L}_{k}(x)=\prod _{i=0,i\ne k}^{N}\frac{x-{x}_{i}}{{x}_{k}-{x}_{i}}$$ |

are cardinal polynomials that satisfy

(14) | $${L}_{k}\left({x}_{i}\right)=\begin{cases}1, & i=k\\ 0, & i\ne k\end{cases}$$ |
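Properties (13) and (14) in a Python sketch (names are ours):

```python
def lagrange_L(k, x, nodes):
    """Cardinal polynomial L_k(x) from (13)."""
    p = 1.0
    for i, xi in enumerate(nodes):
        if i != k:
            p *= (x - xi) / (nodes[k] - xi)
    return p

nodes = [-1.0, -0.5, 0.0, 0.5, 1.0]
# Property (14): L_k is 1 at node k and 0 at every other node.
for k in range(len(nodes)):
    for i, xi in enumerate(nodes):
        want = 1.0 if i == k else 0.0
        assert abs(lagrange_L(k, xi, nodes) - want) < 1e-12
```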

The Lagrange form gives an error term of the form

(15) | $${e}_{N}(x)=f(x)-{I}_{N}(x)=\frac{{f}^{(N+1)}\left(\xi (x)\right)}{(N+1)!}\Psi (x)$$ |

where

(16) | $$\Psi (x)=\prod _{j=0}^{N}\left(x-{x}_{j}\right)$$ |

The underlying function `f`(`x`) is often unknown and the number $\xi $ is only known in simple examples. Thus, $\Psi (x)$ is the only part of the error term which can be controlled. By using the CG or CGL points as interpolation sites, $\Psi (x)$ is made nearly as small as possible (see Burden and Faires (2005), p. 507). On the other hand, it is well known that polynomial interpolation in equally spaced points can be troublesome. The classic example provided by Runge is the function

(17) | $$f(x)=\frac{1}{1+{x}^{2}},\text{\hspace{1em}}-5\le x\le 5$$ |

For the function (17), equidistant polynomial interpolation diverges for $\left|x\right|\ge 3.63$. By using the CGL points (7), which cluster densely around the endpoints of the interval, as interpolation sites, the nonuniform convergence (the Runge Phenomenon) associated with equally spaced polynomial interpolation is avoided.

The RP applet below illustrates equidistant and Chebyshev interpolation for the Runge example (17). The applet starts with `N` = 15 and equidistant interpolation. Use the slider to increase `N` and observe that the oscillations near the boundary become larger and that the approximation is good for |`x`| < 3.63. Select the CGL button at the top of the applet and observe that the oscillations near the boundary disappear.
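The Runge example can also be reproduced without the applet; the Python sketch below (names are ours) compares the maximum error of equidistant and mapped-CGL interpolation for `N` = 20:

```python
import math

def interp(f, nodes, x):
    """Lagrange form of the interpolating polynomial through the nodes."""
    s = 0.0
    for k, xk in enumerate(nodes):
        p = f(xk)
        for i, xi in enumerate(nodes):
            if i != k:
                p *= (x - xi) / (xk - xi)
        s += p
    return s

f = lambda x: 1.0 / (1.0 + x * x)        # Runge's function (17)
N = 20
equi = [-5.0 + 10.0 * k / N for k in range(N + 1)]
cgl = [-5.0 * math.cos(k * math.pi / N) for k in range(N + 1)]  # CGL mapped to [-5, 5]

xs = [-5.0 + 10.0 * j / 1000 for j in range(1001)]
err_equi = max(abs(interp(f, equi, x) - f(x)) for x in xs)
err_cgl = max(abs(interp(f, cgl, x) - f(x)) for x in xs)
assert err_equi > 1.0 and err_cgl < 0.5  # Runge phenomenon vs CGL interpolation
```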

Introducing a discrete grid leads to aliasing. The discrete coefficients can be expressed in terms of the continuous coefficients as

(18) | $${a}_{n}={\alpha}_{n}+\sum _{j=1}^{\infty}\left({\alpha}_{n+2jN}+{\alpha}_{-n+2jN}\right)$$ |

As an example consider the `sign` function with `N` = 9. The difference between the discrete coefficient `a`_{5} and the continuous coefficient ${\alpha}_{5}$ can be quantified by the aliasing relation (18) as

$${a}_{5}={\alpha}_{5}+\left({\alpha}_{23}+{\alpha}_{13}\right)+\left({\alpha}_{41}+{\alpha}_{31}\right)+\left({\alpha}_{59}+{\alpha}_{49}\right)+\cdots $$

This relation is a result of the fact that on the discrete grid, `T`_{5} is identical to `T`_{23}, `T`_{41}, `T`_{59}, ... and also to `T`_{13}, `T`_{31}, `T`_{49}, ... as is illustrated in Figure 1.

Figure 1. On the CGL grid (open black circles) for `N` = 9, `T`_{5} is identical to `T`_{13} (green) and `T`_{23} (cyan). Points of intersection on the CGL grid are marked with red *'s.

The image was produced with the following Matlab script:

```matlab
N = 9; M = 600;
x = -cos(pi*(0:N)./N);      % CGL points (7) for N = 9
xp = linspace(-1,1,M);
T23 = cos(23*acos(xp));     % cyan
T13 = cos(13*acos(xp));     % green
T5  = cos( 5*acos(xp));     % blue
T5g = cos( 5*acos(x));      % T_5 on the CGL grid (red *)
XGL10 = zeros(1,length(x)); % CGL pts (open black circles)
plot(xp,T5,'b',xp,T13,'g',x,T5g,'r*',x,XGL10,'k-o',xp,T23,'c')
xlabel 'x', ylabel 'T'
```

In the CA applet, observe the difference between the odd numbered coefficients of the `S`_{9}, `P`_{9} and `I`_{9} approximations of the sign function (select *sign* from the Functions menu and set `N` = 9 using the slider at the bottom of the applet). On the Approximations menu, select the blue *interpolation, I* and then select the red *continuous, S*. On the Options menu select *plot coefficients*. There is no difference in the even numbered coefficients, as the sign function is odd; thus the continuous even coefficients that are involved in the aliasing relation are all zero. The difference in the odd coefficients is due to aliasing. Make a similar comparison with the truncated discrete series by selecting the blue *discrete, P* from the approximations. Again there is a difference in the odd coefficients that is due to aliasing.

Now compare the two discrete approximations, `I`_{9} (blue *interpolation, I*) and `P`_{9} (red *discrete, P*). The coefficients are identical, but the approximations are different due to ${a}_{9}$ being halved in the interpolating approximation but not in the truncated series.

Repeatedly integrating equation (5) by parts we get

(19) | $${\alpha}_{n}=\frac{1}{{n}^{m}}\frac{2}{\pi}{\int}_{-1}^{1}{\left(1-{x}^{2}\right)}^{-1/2}{f}^{(m)}(x){T}_{n}(x)dx$$ |

Thus, if `f` is `m`-times ($m\ge 1$) continuously differentiable in $\left[-1,1\right]$, the above integral will exist and we can conclude that

(20) | $${\alpha}_{n}=O\left({n}^{-m}\right),\text{\hspace{1em}}n=1,2,\dots $$ |

If we make a careful choice of which definition of the integral to use, the same result can be shown to hold if `f` is ($m-1$)-times differentiable a.e. (almost everywhere) in $\left[-1,1\right]$ with its ($m-1$)'th derivative of bounded variation in $\left[-1,1\right]$.

Since the absolute value of each `T`_{k} is bounded above by 1 on $\left[-1,1\right]$, it follows that the truncation error for the continuous expansion is bounded by the sum of the absolute values of the neglected coefficients:

(21) | $$\left|f(x)-{S}_{N}(x)\right|\le \sum _{n=N+1}^{\infty}\left|{\alpha}_{n}\right|$$ |

A similar bound, with an additional factor of two, holds for the interpolating partial sum:

(22) | $$\left|f(x)-{I}_{N}(x)\right|\le 2\sum _{n=N+1}^{\infty}\left|{\alpha}_{n}\right|,\text{\hspace{1em}}x\in [-1,1]$$ |

From (20), (21), and (22) we conclude that

(23) | $$\left|f(x)-{S}_{N}(x)\right|=O\left({N}^{-m}\right)$$ |

and

(24) | $$\left|f(x)-{I}_{N}(x)\right|=O\left({N}^{-m}\right)$$ |

If `f` is infinitely differentiable the convergence is faster than $O\left({N}^{-m}\right)$ no matter how large we take `m`. This is commonly termed spectral accuracy or exponential accuracy. If `f` can be extended to an analytic function in a suitable region of the complex plane, the pointwise error on $\left[-1,1\right]$ can be shown to be

(25) | $$O\left({r}^{-N}\right)$$ |

for some `r` > 1 (Mason and Handscomb (2003)). In Figure 2 the rather slow decay rate of the error with increasing `N` is illustrated for the absolute value function `f`_{5} for which `m` = 1. This can be contrasted with the rapid spectral convergence of the infinitely smooth function `f`_{2}. Notice that the decay of error for the smooth function ceases at about `N` = 140. This is due to the accuracy of the representation of floating point numbers on the computer which limits accuracy to about 14 or 15 decimal places.
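The contrast in Figure 2 can be reproduced in a few lines; the Python sketch below (names are ours) uses e^x as a stand-in smooth function rather than the applet's `f`_{2}:

```python
import math

def IN_err(f, N, xs):
    """Max error over xs of the interpolating partial sum I_N from (12)."""
    nodes = [-math.cos(k * math.pi / N) for k in range(N + 1)]
    a = []
    for n in range(N + 1):
        t = [f(xk) * math.cos(n * math.acos(xk)) for xk in nodes]
        t[0] *= 0.5
        t[-1] *= 0.5
        a.append(2.0 / N * sum(t))
    def I(x):
        return sum((0.5 if n in (0, N) else 1.0) * a[n]
                   * math.cos(n * math.acos(x)) for n in range(N + 1))
    return max(abs(I(x) - f(x)) for x in xs)

xs = [-1.0 + 2.0 * j / 400 for j in range(401)]
# Smooth function: spectral accuracy already at modest N.
assert IN_err(math.exp, 16, xs) < 1e-12
# f_5(x) = |x|: only slow algebraic decay of the error.
assert 1e-4 < IN_err(abs, 64, xs) < IN_err(abs, 8, xs)
```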

Figure 2. Convergence of an infinitely differentiable function ${f}_{2}(x)=\mathrm{exp}\left(\mathrm{cos}\left(8{x}^{3}+1\right)\right)$ versus convergence of a continuous function ${f}_{5}(x)=\left|x\right|$.

No matter what rate of decay the coefficients have, the convergence rate is only observed for `n` > `n`_{0}. Using an approximation with fewer than `n`_{0} terms may result in a very bad approximation. For example, the decay rate of the coefficients of the infinitely smooth function in the applet is not yet evident for `N` = 17 and the approximation is very poor.

Equation (19) allows us to conclude that if `f` is a polynomial of degree `N`, then ${\alpha}_{n}=0$ for all `n` > `N` since ${f}^{(n)}(x)=0$ for `n` > `N`. In the CA applet select the *7th degree polynomial* from the Functions menu. Use the slider at the bottom of the applet to set `N` to 9. From the Options menu check *plot coefficients* and *semiLogY*. Observe that ${\alpha}_{n}=0$ (to within machine precision) for `n` > 7.

If `m` = 0, i.e., `f` is discontinuous, the accuracy of the Chebyshev approximation methods reduces to `O`(1) near the discontinuity. Sufficiently far away from the discontinuity, the convergence will be slowed to $O\left({N}^{-1}\right)$. Oscillations will be present near the discontinuity and they will not diminish as $N\to \infty $. Additionally, the oscillations will not even be localized near the discontinuity. This situation is referred to as the Gibbs phenomenon. A nice history of the Gibbs phenomenon can be found in Hewitt and Hewitt (1979).

In the CA applet, select the *sign* function from the Functions menu. From the Options menu uncheck *plot coefficients* and check *semiLogY*. Use the slider at the bottom of the applet to slowly change `N` from 10 to 256. Observe that the maximum amplitude of the overshoot at the discontinuity does not decrease with increasing `N`. Observe that sufficiently far away from the discontinuity the oscillations are slowly decaying. Now check *plot coefficients* on the Options menu and again use the slider to slowly change `N` from 10 to 256. Notice that the coefficients are decaying, but at a very slow rate. Spectral convergence has been lost due to the discontinuity. Select the *smooth* function from the Functions menu and compare how fast the coefficients of this function decay compared to the sign function.

Spectral filters can be used to enhance the decay rate of the Chebyshev coefficients (Vandeven (1991)) and to lessen the effects of the Gibbs phenomenon. The filtered Chebyshev approximation is

(26) | $${F}_{N}(x)=\sum _{n=0}^{N}\sigma \left(\frac{n}{N}\right){a}_{n}{T}_{n}(x)$$ |

where $\sigma $ is a spectral filter. A `p`th-order (`p` > 1) spectral filter is defined as a sufficiently smooth function satisfying

(27) | $$\sigma (0)=1$$ |

(28) | $${\sigma}^{(m)}(0)=0,\text{\hspace{1em}}m=1,2,\dots ,p-1$$ |

(29) | $${\sigma}^{(m)}(1)=0,\text{\hspace{1em}}m=0,1,\dots ,p-1$$ |

The convergence rate of the filtered approximation is determined solely by the order of the filter and the regularity of the function away from the point of discontinuity.

If `p` is chosen increasing with `N`, the filtered expansion recovers exponential accuracy away from a discontinuity. Assuming that `f` has a discontinuity at `x`_{0} and setting $d(x)=x-{x}_{0}$, the estimate

(30) | $$\left|f(x)-{F}_{N}(x)\right|\le \frac{K}{d{(x)}^{p-1}{N}^{p-1}}$$ |

holds where `K` is a constant. If `p` is sufficiently large, and `d`(`x`) not too small, the error goes to zero faster than any finite power of `N`, i.e. spectral accuracy is recovered. When `x` is close to a discontinuity the error increases. If `d`(`x`) = `O`(1/`N`) then the error estimate is `O`(1).

Many different filter functions are available, but perhaps the most versatile and widely used filter is the exponential filter

(31) | $$\sigma (\omega )=\mathrm{exp}\left(-\alpha {\omega}^{2p}\right),\text{\hspace{1em}}p=1,2,\dots $$ |

of order 2`p`. In order for condition (29) to be satisfied to machine precision, the parameter $\alpha $ is taken as $\alpha =-\mathrm{ln}\left({\epsilon}_{m}\right)$ where ${\epsilon}_{m}$ is machine epsilon, i.e. machine zero. In IEEE double precision floating point arithmetic, ${\epsilon}_{m}={2}^{-52}\approx 2.2204\times {10}^{-16}$ and $\mathrm{ln}\left({\epsilon}_{m}\right)\approx -36.0437$.
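The filter (31) itself is one line; a Python sketch (name ours) checking condition (27) and, to machine precision, (29):

```python
import math

def exp_filter(omega, p, eps=2.0 ** -52):
    """Exponential filter (31) of order 2p with alpha = -ln(eps_m)."""
    alpha = -math.log(eps)
    return math.exp(-alpha * omega ** (2 * p))

assert exp_filter(0.0, 2) == 1.0                 # condition (27)
assert exp_filter(1.0, 2) < 1e-15                # sigma(1) = eps_m, "machine zero"
assert exp_filter(0.3, 8) > exp_filter(0.3, 2)   # higher order damps low modes less
```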

The exponential filter is implemented in the CA applet. The default order of the filter is 4 (`p` = 2). Select the *sign* function from the Functions menu. From the Approximations menu select the blue *interpolation* and red *filter* options. From the Options menu check *semiLogY* and uncheck *connect*. Use the slider to increase `N` and observe the rapid decrease in the error of the filtered approximation away from the discontinuity. The filter has restored spectral accuracy at points sufficiently far away from the discontinuity. Next, check *plot coefficients* on the Options menu and compare the filtered and unfiltered coefficients. Now, display the *parameters dialog* from the Options menu and enter 1 in the *filter order* box to change the order of the filter to 2. Repeat the above experiments. Observe how the sharp front at the discontinuity is rounded or smeared in the filtered approximation by the low order filter. Enter 4 in the *filter order* box to change the order of the filter to 8 and repeat. What do you observe?

In the CA applet, select the *absolute value* function from the Functions menu and repeat the previous applet activity.

The EF applet illustrates the strength of the damping applied in equation (26) to the coefficients `a`_{k}, `k` = 0, 1, ..., `N`, for filters of order 2 to 32. The slider at the bottom of the applet can be used to change the order of the filter. Observe that as the order of the filter increases, less damping is applied to the coefficients with small `k`.

Chebyshev approximation is an old and rich subject. However, many areas that employ Chebyshev polynomials have open questions that have attracted the attention of current researchers. One example is pseudospectral methods for the numerical solution of partial differential equations (PDEs). Chebyshev pseudospectral methods, which are based on the interpolating Chebyshev approximation (12), are well established as powerful methods for the numerical solution of PDEs with sufficiently smooth solutions. Interpolation means that `f`, the function that is approximated, is a known function. The terms collocation and pseudospectral are applied to global polynomial interpolatory methods for solving differential equations for an unknown function `f`. Detailed information on pseudospectral methods may be found in the standard references: Boyd (2000), Canuto, et al. (1988), Funaro (1992), Gottlieb, et al. (1984), Gottlieb and Orszag (1977), and Trefethen (2000).

Many important PDEs have discontinuous (or nearly discontinuous) solutions. See the article Sarra (2003) for a discussion of one such class of PDEs, nonlinear hyperbolic conservation laws. In these cases, the Chebyshev pseudospectral method produces approximations that are contaminated with Gibbs oscillations and suffer from the corresponding loss of spectral accuracy, just like the Chebyshev interpolation methods that the pseudospectral methods are based on.

An active research area is the development of postprocessing methods to remove the Gibbs oscillations from PDE solutions and to restore spectral accuracy. Spectral filters may be used, but they perform poorly in the neighborhood of discontinuities. More sophisticated methods do better near discontinuities, but they may need to know the exact location of the discontinuities. The methods include Spectral Mollification, Gegenbauer Reconstruction (Gottlieb and Shu, 1997), Padé Filtering, and Digital Total Variation Filtering.

Several postprocessing methods with applications are discussed in Sarra (2003) with supporting web material at the Matlab Postprocessing Toolbox. The ultimate goal is a "black box" postprocessing algorithm, which can be given an oscillatory PDE solution and return a postprocessed solution with spectral accuracy restored.

In addition to the exponential filter, other postprocessing methods for lessening the effects of the Gibbs phenomenon exist. Explore some of them which include:

- Other spectral filters. See Boyd (1996).
- Reprojection methods. Reprojection methods work by projecting the slowly converging Chebyshev approximation onto a Gibbs complementary basis in which the convergence is faster. See Gelb and Tanner (2005), Gottlieb and Shu (1997), and Sarra (2003).
- Padé based reconstruction. Padé methods reconstruct the Chebyshev polynomial approximation as a rational approximation (Mace, 2005).
- Digital Total Variation (DTV) filtering. DTV methods, which were developed in image processing, have been used to postprocess Chebyshev approximations. See Sarra (2006).

Chebyshev approximation and its relation to polynomial interpolation at equidistant nodes have been discussed. We have illustrated how the Chebyshev methods approximate with spectral accuracy for sufficiently smooth functions, how less smoothness slows down convergence, and how the presence of a discontinuity leads to a lack of convergence at the discontinuity and to slowed convergence away from it. We have described the Gibbs phenomenon, which is characterized by slow or absent convergence as well as non-physical oscillations. Spectral filtering was discussed as a method to lessen the effects of the Gibbs phenomenon and to restore spectral accuracy sufficiently far away from a discontinuity. Postprocessing methods to lessen the effects of the Gibbs oscillations are an active research area which would be an excellent topic for undergraduate research or for a Masters thesis.

- J. Berrut and L. N. Trefethen. Barycentric Lagrange interpolation. SIAM Review, 46(3): 501-517, 2004.
- John P. Boyd. The Erfc-Log Filter and the asymptotics of the Vandeven and Euler sequence accelerations. Houston Journal of Mathematics, 267--275, 1996.
- John P. Boyd. Chebyshev and Fourier Spectral Methods. Dover Publications, Inc, New York, second edition, 2000.
- W. Briggs and V. Henson. The DFT: An Owner's Manual for the Discrete Fourier Transform. SIAM, 1995.
- R. Burden and J. Faires. Numerical Analysis. Brooks Cole, eighth edition, 2005.
- Claudio Canuto, M. Y. Hussaini, Alfio Quarteroni, and Thomas A. Zang. Spectral Methods for Fluid Dynamics. Springer-Verlag, New York, 1988.
- D. Funaro. Polynomial Approximation of Differential Equations. Springer-Verlag, New York, 1992.
- A. Gelb and J. Tanner. Robust reprojection methods for the resolution of the Gibbs phenomenon. To appear in Applied and Computational Harmonic Analysis, 2005.
- David Gottlieb, M. Y. Hussaini, and Steven A. Orszag. Theory and application of spectral methods. In R. G. Voigt, D. Gottlieb, and M. Y. Hussaini, editors, Spectral Methods for Partial Differential Equations, 1--54. SIAM, Philadelphia, 1984.
- David Gottlieb and Steven A. Orszag. Numerical Analysis of Spectral Methods. SIAM, Philadelphia, PA, 1977.
- David Gottlieb and Chi-Wang Shu. On the Gibbs phenomenon and its resolution. SIAM Review, 39(4): 644--668, 1997.
- E. Hewitt and R. E. Hewitt. The Gibbs-Wilbraham phenomenon: an episode in Fourier analysis. Archive for History of Exact Sciences, 21: 129--160, 1979.
- R. Mace. Reduction of the Gibbs Phenomenon in Chebyshev approximations via Chebyshev-Padé filtering. Master's thesis, Marshall University, 2005.
- J. Mason and D. Handscomb. Chebyshev Polynomials. CRC, 2003.
- T. Rivlin. The Chebyshev Polynomials. Wiley, 1974.
- S. A. Sarra. The method of characteristics with applications to conservation laws. Journal of Online Mathematics and its Applications, 3, 2003. (accessed December 16, 2005).
- S. A. Sarra. The spectral signal processing suite. ACM Transactions on Mathematical Software, 29(2): 1--23, 2003.
- S. A. Sarra. Digital total variation filtering as postprocessing for Chebyshev pseudospectral methods for conservation laws, Numerical Algorithms, 41:17--33, 2006.
- L. N. Trefethen. Spectral Methods in Matlab. SIAM, Philadelphia, 2000.
- H. Vandeven. Family of spectral filters for discontinuous problems. Journal of Scientific Computing, 6: 159--192, 1991.