We continue our analysis with only the 2 x 2 case, since the Java applet to be used for the exercises deals only with this case. As we noted on the preceding page, the Jacobi and Gauss-Seidel Methods are both of the form

x^{(k+1)} = B x^{(k)} + c,

so for a general 2 x 2 matrix

A = [ a_{11}   a_{12} ]
    [ a_{21}   a_{22} ]

their iteration matrices are

Method    B

Jacobi    [ 0                −a_{12}/a_{11} ]
          [ −a_{21}/a_{22}   0              ]

GS        [ 0   −a_{12}/a_{11}               ]
          [ 0   a_{12}a_{21}/(a_{11}a_{22})  ]
Notice that for both methods the diagonal elements of A must be nonzero: a_{11} ≠ 0 and a_{22} ≠ 0.
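For concreteness, the two iteration matrices can be formed in a few lines of code. This is a minimal sketch, not part of the applet; the function names and the example system A = [[4, 1], [2, 5]] are our own hypothetical choices:

```python
# Sketch: iteration matrices for the 2x2 Jacobi and Gauss-Seidel methods.
# The example matrix A = [[4, 1], [2, 5]] is hypothetical.

def jacobi_iteration_matrix(a11, a12, a21, a22):
    """B = -D^{-1}(L + U) for a 2x2 system; requires a11, a22 nonzero."""
    return [[0.0, -a12 / a11],
            [-a21 / a22, 0.0]]

def gauss_seidel_iteration_matrix(a11, a12, a21, a22):
    """B = -(D + L)^{-1} U for a 2x2 system; requires a11, a22 nonzero."""
    return [[0.0, -a12 / a11],
            [0.0, a12 * a21 / (a11 * a22)]]

B_j = jacobi_iteration_matrix(4, 1, 2, 5)
B_gs = gauss_seidel_iteration_matrix(4, 1, 2, 5)
print(B_j)   # [[0.0, -0.25], [-0.4, 0.0]]
print(B_gs)  # [[0.0, -0.25], [0.0, 0.1]]
```

Note how the division by a_{11} and a_{22} makes the nonzero-diagonal requirement explicit.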
It turns out that, if an n x n iteration matrix B has a full set of n linearly independent eigenvectors, then ‖B‖ = |λ_{max}|, where λ_{max} is the eigenvalue of B of greatest magnitude. The 2 x 2 Jacobi and Gauss-Seidel iteration matrices always have two linearly independent eigenvectors, so each method is guaranteed to converge if all of the eigenvalues of B corresponding to that method are of magnitude < 1. This includes cases in which B has complex eigenvalues. For n x n systems, things are more complicated; more general cases for larger systems are discussed in any good numerical analysis or numerical linear algebra text.
We have now answered the first question posed on the preceding page, at least for 2 x 2 systems:
When will each of these methods work? That is, under what conditions will they produce a sequence of approximations x^{(0)}, x^{(1)}, x^{(2)}, … that converges to the true solution x ?
Answer: When the eigenvalues of the corresponding iteration matrix are both less than 1 in magnitude.
Because ‖e^{(k)}‖ ≤ ‖B‖^{k} ‖e^{(0)}‖, the second question is also answered:
When the methods do work, how quickly will the approximations approach the true solution? That is, what will the rate of convergence be?
Answer: The rate will be the same as the rate at which B^{k} converges to 0.
For example, if ‖B‖ = 0.5, then the size of the error e^{(k)} = x − x^{(k)} would be cut approximately in half by each additional iteration. That is, the rate of convergence would be 0.5.
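This rate can be observed numerically. The sketch below uses a hypothetical system of our own choosing, A = [[2, 1], [1, 2]] with b = (3, 3) and exact solution x = (1, 1); for this A the Jacobi eigenvalues are ±√(a_{12}a_{21}/(a_{11}a_{22})) = ±0.5, so the printed error should halve with every iteration:

```python
# Jacobi iteration on a hypothetical 2x2 system with ||B|| = 0.5:
# the error should be cut in half by each iteration.

def jacobi_step(x, A, b):
    """One Jacobi sweep: each component is updated from the OLD iterate."""
    (a11, a12), (a21, a22) = A
    return [(b[0] - a12 * x[1]) / a11,
            (b[1] - a21 * x[0]) / a22]

A = [[2.0, 1.0], [1.0, 2.0]]
b = [3.0, 3.0]            # exact solution is x = (1, 1)
x = [0.0, 0.0]            # starting guess
for k in range(10):
    x = jacobi_step(x, A, b)
    err = max(abs(x[0] - 1.0), abs(x[1] - 1.0))
    print(k + 1, err)     # err is exactly 2^-(k+1) for this system
```

Because every quantity here is a dyadic rational, the halving is exact in floating point, which makes the rate easy to see.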
The size ‖B‖ is directly related to the magnitudes of the eigenvalues of B. Consequently, a major goal in designing an iterative method is that the corresponding iteration matrix B have eigenvalues that are as small (as close to 0) as possible.
The eigenvalues and corresponding eigenvectors of the iteration matrices for the Jacobi and Gauss-Seidel Methods are shown in the following table.

Method    Eigenvalues                                  Eigenvectors

Jacobi    λ = ±√( a_{12}a_{21} / (a_{11}a_{22}) )      ( a_{12}, −λ a_{11} ) for each eigenvalue λ

GS        λ = 0  and  λ = a_{12}a_{21}/(a_{11}a_{22})  ( 1, 0 ) and ( −a_{22}, a_{21} ), respectively
Notice that, for both methods, ‖B‖ = |λ_{max}| < 1 if |a_{12}a_{21} / (a_{11}a_{22})| < 1. Also notice that the magnitude of the nonzero eigenvalue for the Gauss-Seidel Method is the square of either of the two eigenvalues for the Jacobi Method. As a result, if B_{Jacobi} and B_{GS} are the iteration matrices of the 2 x 2 Jacobi and Gauss-Seidel Methods, respectively, then ‖B_{GS}‖ = ‖B_{Jacobi}‖^{2}.
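The tabulated eigenpairs can be spot-checked by multiplying each iteration matrix by the claimed eigenvector and comparing with the eigenvalue times that vector. The sketch below uses a hypothetical example A = [[4, 1], [2, 5]]:

```python
# Spot-check of the eigenpairs for a hypothetical A = [[4, 1], [2, 5]].
import math

a11, a12, a21, a22 = 4.0, 1.0, 2.0, 5.0
lam_j = math.sqrt(a12 * a21 / (a11 * a22))   # Jacobi: λ = ±sqrt(0.1)
lam_gs = a12 * a21 / (a11 * a22)             # Gauss-Seidel: λ = 0 and 0.1

def matvec2(B, v):
    """Multiply a 2x2 matrix by a length-2 vector."""
    return [B[0][0] * v[0] + B[0][1] * v[1],
            B[1][0] * v[0] + B[1][1] * v[1]]

B_j = [[0.0, -a12 / a11], [-a21 / a22, 0.0]]
B_gs = [[0.0, -a12 / a11], [0.0, a12 * a21 / (a11 * a22)]]

# Jacobi eigenvector (a12, -λ a11) for λ = +sqrt(a12*a21/(a11*a22)):
v = [a12, -lam_j * a11]
print(matvec2(B_j, v), [lam_j * v[0], lam_j * v[1]])   # should agree

# Gauss-Seidel eigenvector (-a22, a21) for λ = a12*a21/(a11*a22):
w = [-a22, a21]
print(matvec2(B_gs, w), [lam_gs * w[0], lam_gs * w[1]])  # should agree
```

The two vectors printed on each line should match up to rounding, confirming B v = λ v.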
For example, if ‖B_{Jacobi}‖ = 0.5, then ‖B_{GS}‖ = (0.5)^{2} = 0.25. That is, if each iteration of the Jacobi Method causes the error to be halved, then each iteration of the Gauss-Seidel Method will cause the error to be quartered. Another way to look at this is that approximately twice as many Jacobi iterations are needed to achieve the same level of accuracy (in approximating the exact solution x) as Gauss-Seidel iterations.
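The "twice as many iterations" observation can be checked side by side. This sketch again uses a hypothetical system of our own choosing, A = [[2, 1], [1, 2]] with b = (3, 3) and exact solution (1, 1), for which ‖B_{Jacobi}‖ = 0.5 and ‖B_{GS}‖ = 0.25:

```python
# Hypothetical system: A = [[2, 1], [1, 2]], b = (3, 3), exact solution (1, 1).
# Jacobi error shrinks by 0.5 per iteration; Gauss-Seidel by 0.25.

def jacobi_step(x, b):
    # Both components are computed from the old iterate x.
    return [(b[0] - 1.0 * x[1]) / 2.0,
            (b[1] - 1.0 * x[0]) / 2.0]

def gauss_seidel_step(x, b):
    x1 = (b[0] - 1.0 * x[1]) / 2.0   # uses the old x2
    x2 = (b[1] - 1.0 * x1) / 2.0     # uses the freshly updated x1
    return [x1, x2]

b = [3.0, 3.0]
xj = [0.0, 0.0]
xg = [0.0, 0.0]
for k in range(5):
    xj = jacobi_step(xj, b)
    xg = gauss_seidel_step(xg, b)
    print(k + 1, abs(xj[0] - 1.0), abs(xg[0] - 1.0))
```

After 5 iterations the first-component Jacobi error is 2^{−5} while the Gauss-Seidel error is already 2^{−9}, consistent with Gauss-Seidel squaring the Jacobi convergence factor.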
Everything on this page relates to the case of 2 x 2 systems. As mentioned, for general n x n systems things are certainly more complicated than for the 2 x 2 case. In fact, Jacobi's Method might converge while the Gauss-Seidel Method does not, or vice versa, and it is possible that neither method converges. This is especially true if the original matrix A is not symmetric or not positive definite. Fortunately, many matrices that arise in real-life applications are both symmetric and positive definite.