One of the most important problems in mathematics is to find the values of the n unknowns x1, x2, ..., xn that satisfy a system of n linear equations:

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn

That is, we want to solve for x in Ax = b, where A is the n × n matrix of coefficients aij, x is the column vector of unknowns, and b is the column vector of right-hand sides.
A is commonly referred to as the coefficient matrix. In order to save space, we usually write column vectors in coordinate form, x = (x1, x2, …, xn), and we will follow that practice in these tutorials and exercises.
Suppose we knew the true values of x2, x3, … , xn . Then we could use these values, along with the first equation (or any other equation where the x1 coefficient is nonzero), to find the true value of x1. We usually won’t have the true values of x2, x3, … , xn , but suppose we had pretty good approximations (or guesses) for their values. Then we could hope to use these values to find a pretty good approximation for x1. Similarly, we probably could find a pretty good approximation for any xi if we had good approximations for the values of x1, … , xi-1, xi+1, … , xn. The idea emerging here is that, if we currently have approximations for the values of x = (x1, x2, …, xn), then we can use these current values to find new (and hopefully better) approximations for x = (x1, x2, …, xn).
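The "solve equation i for xi, using current values of the other unknowns" idea described above is essentially the classical Jacobi update. Here is a minimal sketch in Python with NumPy; the function name jacobi_step is my own label, not from the text:

```python
import numpy as np

def jacobi_step(A, b, x_current):
    """One sweep of the idea above: solve equation i for x_i,
    plugging in the current approximations for every other unknown."""
    n = len(b)
    x_new = np.empty(n)
    for i in range(n):
        # sum of a_ij * x_j over all j != i, using current values
        s = A[i] @ x_current - A[i, i] * x_current[i]
        x_new[i] = (b[i] - s) / A[i, i]  # requires a_ii != 0
    return x_new
```

Repeating this step over and over, feeding each x_new back in as x_current, is exactly the iterative process described next.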
The fundamental idea of an iterative method is to use xcurrent, a current approximation (or guess) for the true solution x of Ax = b, to find a new approximation xnew, where xnew is closer to x than xcurrent is. We then use this new approximation as the current approximation to find yet another, better approximation.
Iterate means repeat, so an iterative method repeats this process over and over, each time using the current approximation to produce a better approximation for the true solution, until the current approximation is sufficiently close to the true solution -- or until you realize that the sequence of approximations resulting from these iterations is not converging to the true solution.
So given an initial guess or approximation x(0) for the true solution x, we use x(0) to find (in a manner we’ll discuss shortly) a new approximation x(1), then we use x(1) to find another, better approximation x(2), and so on. In general, we use x(k) to find a new approximation x(k+1). We expect that x(k) → x as k → ∞; that is, our approximations should become closer to the true solution as we take more iterations of this process. Of course, we hope that our current approximation x(k) is close enough to x for our needs after a relatively small number of iterations.
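The loop just described can be sketched generically: apply some update rule to the current approximation until two successive approximations barely differ. This is an illustrative driver, not from the original text; the function name iterate and the tolerance are my assumptions:

```python
import numpy as np

def iterate(update, x0, tol=1e-8, max_iter=500):
    """Repeatedly apply `update` to the current approximation,
    stopping when x(k+1) and x(k) are nearly the same."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = update(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1   # converged (by this test)
        x = x_new
    return x, max_iter            # gave up after max_iter steps
```

The max_iter cap reflects the warning above: the sequence of approximations may fail to converge, and the loop must not run forever in that case.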
Since we don’t actually have the true solution x (if we did, why would we be trying to find it?), we can’t check directly how close our current approximation x(k) is to x. One common substitute is to check how close Ax(k) is to Ax -- that is, how close Ax(k) is to b.
Another way to check the accuracy of our current approximation is by looking at the differences in successive approximations, x(k) − x(k-1). We generally expect x(k) to be close to x if x(k) − x(k-1) is small.
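Both closeness checks just described take only a few lines of code. The following is a sketch; the function names and the choice of tolerance are illustrative, not prescribed by the text:

```python
import numpy as np

def residual_norm(A, b, x_k):
    """Size of the residual b - A x(k): how close A x(k) is to b.
    A small residual suggests x(k) is close to x."""
    return np.linalg.norm(b - A @ x_k)

def small_change(x_k, x_prev, tol=1e-8):
    """True when successive approximations x(k) and x(k-1) barely
    differ, which we generally take as a sign that x(k) is close to x."""
    return np.linalg.norm(x_k - x_prev) < tol
```

In practice an iterative solver often uses one (or both) of these as its stopping test.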
This idea of iteration is not unique to linear algebra. An example that may be familiar to you is Newton’s Method, which finds approximations to a root x of f(x) = 0. Given a current approximation x(k) to x, Newton’s Method finds x(k+1) by

x(k+1) = x(k) − f(x(k)) / f′(x(k)).
Under the right conditions, x(k) → x [that is, f(x(k)) → 0] as k → ∞. Interestingly, the underlying theory of Newton’s Method is actually found in certain iterative methods that solve our problem Ax = b. Those methods are discussed in numerical linear algebra courses.
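For concreteness, Newton’s Method can be coded directly from the formula above. This sketch uses the successive-difference stopping test discussed earlier; the function name and tolerances are my own choices:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=100):
    """Newton's Method for f(x) = 0:
    x(k+1) = x(k) - f(x(k)) / f'(x(k))."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # assumes f'(x(k)) != 0
        x = x - step
        if abs(step) < tol:       # successive approximations barely differ
            return x
    return x  # may not have converged
```

For example, applying it to f(x) = x² − 2 with starting guess x(0) = 1 produces successive approximations converging to √2.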