
Iterative Methods for Solving Ax = b - Analysis of Jacobi and Gauss-Seidel Methods

Author(s): David M. Strong
We continue our analysis with only the 2 × 2 case, since the Java applet to be used for the exercises deals only with this case. As we noted on the preceding page, the Jacobi and Gauss-Seidel Methods are both of the form

\[ \mathbf{x}^{(k+1)} = B\,\mathbf{x}^{(k)} + \mathbf{c}, \]

so for a general 2 × 2 matrix

\[ A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \]

their iteration matrices are

Method    B
Jacobi    \( B_{J} = \begin{pmatrix} 0 & -a_{12}/a_{11} \\ -a_{21}/a_{22} & 0 \end{pmatrix} \)
GS        \( B_{GS} = \begin{pmatrix} 0 & -a_{12}/a_{11} \\ 0 & a_{12}a_{21}/(a_{11}a_{22}) \end{pmatrix} \)

Notice that for both methods the diagonal elements of A must be non-zero: a11 ≠ 0 and a22 ≠ 0.

It turns out that, if an n × n iteration matrix B has a full set of n linearly independent eigenvectors, then ||B|| = |λmax|, where λmax is the eigenvalue of B of greatest magnitude. The 2 × 2 Jacobi and Gauss-Seidel iteration matrices have two linearly independent eigenvectors, so each method is guaranteed to converge if all of the eigenvalues of B corresponding to that method are of magnitude < 1. This includes cases in which B has complex eigenvalues. For n × n systems, things are more complicated; more general cases for larger systems are discussed in detail in any good numerical analysis or numerical linear algebra text.
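As a quick sketch of this convergence test, we can build both iteration matrices for a small example and check that every eigenvalue has magnitude less than 1. (The matrix entries below are hypothetical, chosen only for illustration; they are not from the applet.)

```python
import numpy as np

# Hypothetical 2 x 2 system Ax = b with nonzero diagonal entries
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

a11, a12 = A[0]
a21, a22 = A[1]

# Jacobi iteration matrix for the 2 x 2 case
B_jacobi = np.array([[0.0,        -a12 / a11],
                     [-a21 / a22,  0.0      ]])

# Gauss-Seidel iteration matrix for the 2 x 2 case
B_gs = np.array([[0.0, -a12 / a11],
                 [0.0,  a12 * a21 / (a11 * a22)]])

# Each method converges when every eigenvalue of its B has magnitude < 1
rho_jacobi = np.abs(np.linalg.eigvals(B_jacobi)).max()
rho_gs = np.abs(np.linalg.eigvals(B_gs)).max()
print(rho_jacobi, rho_gs)  # both are < 1 here, so both methods converge
```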

We have now answered the first question posed on the preceding page, at least for 2 x 2 systems:

When will each of these methods work? That is, under what conditions will they produce a sequence of approximations x(0), x(1), x(2), … that converges to the true solution x ?

Answer: When the eigenvalues of the corresponding iteration matrix are both less than 1 in magnitude.

Because ||e(k)|| ≤ ||B||^k ||e(0)||, the second question is also answered.

When the methods do work, how quickly will the approximations approach the true solution? That is, what will the rate of convergence be?

Answer: The rate will be the same as the rate at which ||B||^k converges to 0.

For example, if ||B|| = 0.5, then the size of the error e(k) = x − x(k) would be cut approximately in half by each additional iteration. That is, the rate of convergence would be 0.5.
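To see this rate in action, here is a sketch (again using a hypothetical system, not one from the applet) that runs Jacobi sweeps and tracks how fast the error ||x − x(k)|| shrinks:

```python
import numpy as np

# Hypothetical 2 x 2 system with exact solution x = (1.4, 3.4)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([9.0, 13.0])
x_true = np.linalg.solve(A, b)

# One Jacobi sweep: solve row i for x_i using only the previous iterate
def jacobi_step(x):
    return np.array([(b[0] - A[0, 1] * x[1]) / A[0, 0],
                     (b[1] - A[1, 0] * x[0]) / A[1, 1]])

x = np.zeros(2)
errors = [np.linalg.norm(x_true - x)]
for _ in range(10):
    x = jacobi_step(x)
    errors.append(np.linalg.norm(x_true - x))

# Here the Jacobi eigenvalues satisfy lambda^2 = a12*a21/(a11*a22) = 1/6,
# so the error shrinks by a factor of 1/6 every two sweeps -- an average
# factor of sqrt(1/6), about 0.41, per sweep, which is the rate of
# convergence in the sense described above
print(errors[2] / errors[0], errors[4] / errors[2])
```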

The size of ||B|| is directly related to the magnitudes of the eigenvalues of B. Consequently, a major goal in designing an iterative method is that the corresponding iteration matrix B have eigenvalues that are as small in magnitude (as close to 0) as possible.

The eigenvalues and corresponding eigenvectors (up to scaling) for the Jacobi and Gauss-Seidel Methods are shown in the following table.

Method    Eigenvalues    Eigenvectors
Jacobi    \( \lambda_{1,2} = \pm\sqrt{\dfrac{a_{12}a_{21}}{a_{11}a_{22}}} \)    \( \begin{pmatrix} a_{12} \\ -\lambda_{1,2}\,a_{11} \end{pmatrix} \)
GS        \( \lambda_{1} = 0, \quad \lambda_{2} = \dfrac{a_{12}a_{21}}{a_{11}a_{22}} \)    \( \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} -a_{22} \\ a_{21} \end{pmatrix} \)

Notice that, for both methods, ||B|| = |λmax| < 1 if |a12a21/(a11a22)| < 1. Also notice that the magnitude of the non-zero eigenvalue for the Gauss-Seidel Method is the square of the magnitude of either of the two eigenvalues for the Jacobi Method. As a result, if BJacobi and BGS are the iteration matrices of the 2 × 2 Jacobi and Gauss-Seidel Methods, respectively, then ||BGS|| = ||BJacobi||^2.

For example, if ||BJacobi|| = 0.5, then ||BGS|| = (0.5)^2 = 0.25. That is, if each iteration of the Jacobi Method causes the error to be halved, then each iteration of the Gauss-Seidel Method will cause the error to be quartered. Another way to look at this is that approximately twice as many Jacobi Method iterations are needed to achieve the same level of accuracy (in approximating the exact solution x) as with the Gauss-Seidel Method.
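This two-for-one relationship can be checked numerically. The sketch below (with a hypothetical system and tolerance) counts how many sweeps each method needs to reach the same accuracy:

```python
import numpy as np

# Hypothetical 2 x 2 system; count sweeps needed to drive the error
# below a chosen tolerance for each method
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([9.0, 13.0])
x_true = np.linalg.solve(A, b)
tol = 1e-10

def jacobi_step(x):
    # Jacobi: both components are updated from the previous iterate
    return np.array([(b[0] - A[0, 1] * x[1]) / A[0, 0],
                     (b[1] - A[1, 0] * x[0]) / A[1, 1]])

def gauss_seidel_step(x):
    # Gauss-Seidel: the freshly updated x1 is used when computing x2
    x1 = (b[0] - A[0, 1] * x[1]) / A[0, 0]
    x2 = (b[1] - A[1, 0] * x1) / A[1, 1]
    return np.array([x1, x2])

def iterations_needed(step):
    x = np.zeros(2)
    count = 0
    while np.linalg.norm(x_true - x) > tol:
        x = step(x)
        count += 1
    return count

n_jacobi = iterations_needed(jacobi_step)
n_gs = iterations_needed(gauss_seidel_step)
print(n_jacobi, n_gs)  # Jacobi needs roughly twice as many sweeps
```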

Everything on this page relates to the case of 2 x 2 systems. As mentioned, for general n x n systems, things are generally different and certainly more complicated than for the 2 x 2 case. In fact, Jacobi's Method might converge while the Gauss-Seidel Method does not, or vice versa, and it's possible that neither method converges. This is especially true if the original matrix A is not symmetric or positive definite. Fortunately, many matrices that arise in real life applications are both symmetric and positive definite.

David M. Strong, "Iterative Methods for Solving Ax = b - Analysis of Jacobi and Gauss-Seidel Methods," Convergence (July 2005)