
Iterative Methods for Solving Ax = b - Convergence Analysis of Iterative Methods

Author(s): David M. Strong
On this page and the next, we attempt to answer two questions regarding the Jacobi and Gauss-Seidel Methods:
  1. When will each of these methods work? That is, under what conditions will they produce a sequence of approximations x(0), x(1), x(2), … that converges to the true solution x?

  2. When they do work, how quickly will the approximations approach the true solution? That is, what will the rate of convergence be?
In general, an iterative method that finds the solution to Ax = b takes the form

Mx(k+1) = Nx(k) + b,

so that

x(k+1) = M⁻¹(Nx(k) + b),

which we can rewrite as

x(k+1) = Bx(k) + c,

where

B = M⁻¹N   and   c = M⁻¹b.
B is often referred to as the iteration matrix.
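To make the general scheme concrete, here is a minimal NumPy sketch (the function name splitting_iteration, the 2×2 test system, and the Jacobi-style choice of M are my own illustrations, not from the article). It carries out Mx(k+1) = Nx(k) + b by solving with M at each step rather than forming M⁻¹ explicitly.

```python
import numpy as np

def splitting_iteration(M, N, b, x0, num_iters=50):
    """Iterate M x(k+1) = N x(k) + b, equivalently x(k+1) = B x(k) + c,
    where B = M^(-1) N and c = M^(-1) b."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        # Solve M x_next = N x + b rather than forming M^(-1) explicitly.
        x = np.linalg.solve(M, N @ x + b)
    return x

# Example: a small system with M chosen as the diagonal of A
# (a Jacobi-style splitting), so that A = M - N.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])
M = np.diag(np.diag(A))
N = M - A
x = splitting_iteration(M, N, b, np.zeros(2))
print(x, np.linalg.solve(A, b))  # the two results should agree closely
```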

If the current approximation x(k) is, in fact, the exact solution x, then the iterative method should certainly produce a next iterate x(k+1) that is also the exact solution. That is, it should be true that x = Bx + c, so

Mx = Nx + b,   that is,   (M − N)x = b.
Of course, since the problem we are trying to solve is Ax = b, M and N must be chosen so that A = M − N.

On the other hand, choosing A = M − N does not by itself guarantee that the iterative method will produce a sequence of vectors x(0), x(1), x(2), … that converges to the true solution x. Whether a particular method works depends on the iteration matrix B = M⁻¹N. In fact, in general, B completely determines the convergence (or not) of an iterative method. In particular, the initial guess generally has no effect on whether a particular method converges or on its rate of convergence, although an initial guess far from the true solution will require more iterations to reach an acceptable approximation than one closer to it.
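A quick experiment consistent with this observation, reusing the hypothetical 2×2 test system from the sketch above: two very different initial guesses are driven to the same solution, because the iteration matrix B is the same in both cases.

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
M = np.diag(np.diag(A))          # same Jacobi-style splitting as above
N = M - A
B = np.linalg.solve(M, N)        # iteration matrix B = M^(-1) N
c = np.linalg.solve(M, b)        # c = M^(-1) b

# Two very different initial guesses; B is the same, so both converge
# to the same solution, just after different numbers of iterations.
for x0 in (np.zeros(2), np.array([1000.0, -1000.0])):
    x = x0
    for _ in range(100):
        x = B @ x + c
    print(x0, "->", x)           # both print (approximately) the same x
```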

To understand the convergence properties of an iterative method

x(k+1) = Bx(k) + c,

we subtract it from the equation

x = Bx + c,

which gives us

x − x(k+1) = B(x − x(k)).

That is, since the current error is e(k) = x − x(k),

e(k+1) = Be(k),   and therefore   e(k) = Bᵏe(0).
To be clear, the superscript of B is the power of B, while the superscript of vector e (inside parentheses) is the iteration number to which this particular error corresponds.
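The relationship e(k) = Bᵏe(0) is easy to verify numerically. The sketch below (again on a made-up 2×2 system, not one from the article) compares the error measured directly after k steps with Bᵏe(0):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
M = np.diag(np.diag(A))
N = M - A
B = np.linalg.solve(M, N)            # B = M^(-1) N
c = np.linalg.solve(M, b)            # c = M^(-1) b
x_true = np.linalg.solve(A, b)

x = np.zeros(2)                      # initial guess x(0)
e0 = x_true - x                      # e(0) = x - x(0)
for k in range(1, 6):
    x = B @ x + c                    # one step of the iteration
    e_direct = x_true - x            # e(k) measured directly
    e_formula = np.linalg.matrix_power(B, k) @ e0   # B^k e(0)
    print(k, np.allclose(e_direct, e_formula))      # True at every k
```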

Full appreciation of the significance of the relationship e(k) = Bᵏe(0) requires some familiarity with matrix norms. Just like the norm of a vector, the norm of a matrix ||B|| tells us the “size” of the matrix (not its dimensions). More precisely, for any given vector v, the norm of the matrix bounds how much bigger (in norm) Bv can be compared to v: it is always true that ||Bv|| ≤ ||B|| ||v||.
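Here is a small numerical illustration of this inequality; the choice of NumPy's induced 2-norm and the random test matrix are mine, not the article's.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
B = rng.standard_normal((3, 3))
norm_B = np.linalg.norm(B, 2)        # induced 2-norm: largest singular value

for _ in range(5):
    v = rng.standard_normal(3)
    lhs = np.linalg.norm(B @ v)
    rhs = norm_B * np.linalg.norm(v)
    print(lhs <= rhs + 1e-12)        # True: ||Bv|| <= ||B|| ||v||
```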

Consequently, since ||e(k)|| = ||Bᵏe(0)|| ≤ ||B||ᵏ ||e(0)||, we have ||e(k)|| → 0 (which is the same as e(k) → 0) whenever ||B|| < 1. As we show on the next page, there is a particular iteration matrix B for each of the Jacobi and Gauss-Seidel Methods. For each method, the smaller ||B|| is, the faster the error converges to 0 -- that is, the faster the approximation approaches the true solution. On the other hand, if ||B|| ≥ 1, the error can grow, and our approximations may move away from the true solution rather than toward it.
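For instance, for the hypothetical Jacobi-style splitting used in the sketches above, ||B||∞ = 0.4, so each error norm is at most 0.4 times the previous one:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
M = np.diag(np.diag(A))
N = M - A
B = np.linalg.solve(M, N)
c = np.linalg.solve(M, b)
x_true = np.linalg.solve(A, b)

print(np.linalg.norm(B, np.inf))     # 0.4 < 1, so the iteration converges

x = np.zeros(2)
for k in range(5):
    x = B @ x + c
    # each error norm is at most 0.4 times the previous one
    print(k + 1, np.linalg.norm(x_true - x, np.inf))
```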

We end this section by noting that one condition sometimes encountered in practice that will guarantee that ||B|| < 1 is that matrix A is strictly diagonally dominant. This means that, for each row of A, the absolute value of the diagonal element is larger than the sum of the absolute values of the off-diagonal elements.
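This row condition is straightforward to test in code; a minimal sketch (the function name is my own):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    diag = np.abs(np.diag(A))
    off_diag = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off_diag))

print(is_strictly_diagonally_dominant(np.array([[4.0, 1.0],
                                                [2.0, 5.0]])))   # True
print(is_strictly_diagonally_dominant(np.array([[1.0, 3.0],
                                                [2.0, 1.0]])))   # False
```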

David M. Strong, "Iterative Methods for Solving Ax = b - Convergence Analysis of Iterative Methods," Convergence (July 2005)