One of the most important problems in mathematics is to find the values of the *n* unknowns *x*_{1}, *x*_{2}, ..., *x*_{n} that satisfy the system of *n* equations

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n.
\end{aligned}
$$

That is, we want to solve for *x* in *A**x* = *b*, where

$$
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix},
\qquad
x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},
\qquad
b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.
$$

*A* is commonly referred to as the *coefficient* matrix. In order to save space, we usually write column vectors in coordinate form, *x* = (*x*_{1}, *x*_{2}, …, *x*_{n}), and we will follow that practice in these tutorials and exercises.
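As a concrete illustration, here is a small Python/NumPy sketch that sets up one particular 3 × 3 system *A**x* = *b* and solves it directly; the entries of *A* and *b* are made up for illustration, and `np.linalg.solve` serves only as a point of comparison for the iterative approach described below.

```python
import numpy as np

# A small made-up 3 x 3 instance of A x = b; the entries are illustrative only.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])   # coefficient matrix
b = np.array([2.0, 6.0, 2.0])        # right-hand side

# Direct solution, for comparison with the iterative approach discussed below.
x = np.linalg.solve(A, b)
print(x)  # x = (x_1, x_2, x_3) in coordinate form
```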

Suppose we knew the true values of *x*_{2}, *x*_{3}, … , *x*_{n} . Then we could use these values, along with the first equation (or any other equation where the *x*_{1} coefficient is nonzero), to find the true value of *x*_{1}. We usually won’t have the true values of *x*_{2}, *x*_{3}, … , *x*_{n} , but suppose we had pretty good approximations (or guesses) for their values. Then we could hope to use these values to find a pretty good approximation for *x*_{1}. Similarly, we probably could find a pretty good approximation for any *x*_{i} if we had good approximations for the values of *x*_{1}, … , *x*_{i-1}, *x*_{i+1}, … , *x*_{n}. The idea emerging here is that, if we currently have approximations for the values of *x* = (*x*_{1}, *x*_{2}, …, *x*_{n}), then we can use these current values to find new (and hopefully better) approximations for *x* = (*x*_{1}, *x*_{2}, …, *x*_{n}).
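To make this concrete, here is a minimal Python sketch of the step just described: it solves the *i*-th equation for *x*_{i}, using the current approximations of the other unknowns. The function name `solve_for_xi` is our own, and *A* is assumed to be stored as a NumPy array whose diagonal entry *a*_{ii} is nonzero.

```python
import numpy as np

def solve_for_xi(A, b, x_current, i):
    """Solve the i-th equation for x_i, treating the other entries of
    x_current as if they were the true values of the other unknowns.
    Requires a nonzero coefficient A[i, i], as noted above."""
    n = len(b)
    # a_i1*x_1 + ... + a_in*x_n = b_i, with every term except the x_i term
    # moved to the right-hand side using the current approximations.
    others = sum(A[i, j] * x_current[j] for j in range(n) if j != i)
    return (b[i] - others) / A[i, i]
```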

The fundamental idea of an iterative method is to use **x**^{current}, a current approximation (or guess) for the true solution **x** of *A**x* = *b*, to find a new approximation **x**^{new}, where **x**^{new} is closer to **x** than **x**^{current} is. We then use this new approximation as the current approximation to find yet another, better approximation.

*Iterate* means *repeat*, so an iterative method repeats this process over and over, each time using the current approximation to produce a better approximation for the true solution, until the current approximation is sufficiently close to the true solution -- or until you realize that the sequence of approximations resulting from these iterations is not converging to the true solution.

So given an initial guess or approximation **x**^{(0)} for the true solution **x**, we use **x**^{(0)} to find (in a manner we’ll discuss shortly) a new approximation **x**^{(1)}, then we use **x**^{(1)} to find another, better approximation **x**^{(2)}, and so on. In general, we use **x**^{(k)} to find a new approximation **x**^{(k+1)}. We expect that **x**^{(k)} → **x** as *k* → ∞; that is, our approximations should become closer to the true solution as we take more iterations of this process. Of course, we hope that our current approximation **x**^{(k)} is close enough to **x** for our needs after a relatively small number of iterations.
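In outline, and as a minimal sketch only, the iteration might look like the following in Python; the name `new_approximation` is a placeholder of our own for whichever specific update rule (discussed shortly) produces **x**^{(k+1)} from **x**^{(k)}, and the stopping tests described next are omitted here.

```python
def iterate(new_approximation, x0, num_iterations):
    """Repeatedly apply an update rule: x^(0) -> x^(1) -> x^(2) -> ...
    `new_approximation` stands in for whichever specific rule turns
    x^(k) into x^(k+1)."""
    x = x0
    for _ in range(num_iterations):
        x = new_approximation(x)   # use the current approximation to get a new one
    return x
```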

Since we don’t actually have the true solution **x** (if we did, why would we be trying to find it?), we can’t check directly how close our current approximation **x**^{(k)} is to **x**. One common way to check the closeness of **x**^{(k)} to **x** is instead to check how close *A***x**^{(k)} is to *A***x**, that is, how close *A***x**^{(k)} is to **b**.

Another way to check the accuracy of our current approximation is to look at the difference between successive approximations, **x**^{(k)} − **x**^{(k-1)}. We generally expect **x**^{(k)} to be close to **x** if this difference is small.
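Both of these checks can be written with vector norms. Here is a small sketch, assuming NumPy; the function name `close_enough` and the tolerance `tol` are our own illustrative choices rather than part of any particular method.

```python
import numpy as np

def close_enough(A, b, x_new, x_old, tol=1e-8):
    """Two common stopping tests: is the residual A x^(k) - b small, and is
    the difference between successive approximations x^(k) - x^(k-1) small?
    Sizes are measured with a vector norm; `tol` is a user-chosen tolerance."""
    residual_small = np.linalg.norm(A @ x_new - b) < tol
    step_small = np.linalg.norm(x_new - x_old) < tol
    return residual_small and step_small
```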

This idea of iteration is not unique to linear algebra. An example that may be familiar to you is Newton’s Method, which finds approximations to the root *x* of *f*(*x*) = 0. Given a current approximation *x*^{(k)} to *x*, Newton’s Method finds *x*^{(k+1)} by

$$
x^{(k+1)} = x^{(k)} - \frac{f\left(x^{(k)}\right)}{f'\left(x^{(k)}\right)}.
$$

Under the right conditions, *x*^{(k)} → *x* [that is, *f*(*x*^{(k)}) → 0] as *k* → ∞. Interestingly, the ideas behind Newton’s Method also underlie certain iterative methods for solving our problem *A**x* = *b*. Those methods are discussed in numerical linear algebra courses.
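For comparison with the discussion above, here is a short Python sketch of Newton’s Method for a single-variable function; the example function *f*(*x*) = *x*^{2} − 2 and the starting guess are our own choices for illustration.

```python
def newton(f, f_prime, x0, num_iterations=10):
    """Newton's Method: x^(k+1) = x^(k) - f(x^(k)) / f'(x^(k))."""
    x = x0
    for _ in range(num_iterations):
        x = x - f(x) / f_prime(x)
    return x

# Example (our own): approximate the positive root of f(x) = x^2 - 2,
# i.e., the square root of 2.
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)  # approximately 1.414213562...
```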