 # Iterative Methods for Solving Ax = b - Exercises, Part 2: All Methods

Author(s):
David M. Strong
1. Suppose that v1 and v2 are linearly independent eigenvectors of the iteration matrix B, with corresponding eigenvalues λ1 and λ2, and suppose that the initial error can be written as a linear combination of these two vectors: e(0) = c1v1 + c2v2. Recall that the error after iteration k is e(k) = Be(k-1), so that e(k) = B^k e(0).

1. Prove (by induction) that e(k) = c1λ1^k v1 + c2λ2^k v2.

2. Show that if |λ1| < 1 and |λ2| < 1, then ||e(k)|| → 0 as k → ∞. Use the fact that ||e(k)|| = ||c1λ1^k v1 + c2λ2^k v2|| ≤ |c1| |λ1|^k ||v1|| + |c2| |λ2|^k ||v2||.
This shows that the initial error, and thus the initial guess, don't matter much. What matters most is the eigenvalues of B, which tell you both whether the error will go to 0 and, if so, how quickly. More generally, what matters is the size of ||B||, which is directly related to the eigenvalues.
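As a quick numerical illustration of why the eigenvalues control convergence, the sketch below iterates e(k) = Be(k-1) for a hypothetical 2 x 2 iteration matrix B with eigenvalues 0.9 and 0.3 (the matrix and starting error are made up for illustration; any B whose eigenvalues have magnitude less than 1 behaves the same way):

```python
import math

# Hypothetical iteration matrix with eigenvalues 0.9 and 0.3
# (eigenvectors v1 = (1, 1) and v2 = (1, -1)).
B = [[0.6, 0.3],
     [0.3, 0.6]]

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return (M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1])

def norm(v):
    return math.hypot(v[0], v[1])

# Initial error e(0) = 2*v1 + 1*v2 = (3, 1)
e = (3.0, 1.0)
norms = [norm(e)]
for k in range(1, 21):
    e = matvec(B, e)              # e(k) = B e(k-1)
    norms.append(norm(e))

ratios = [norms[k] / norms[k-1] for k in range(1, 21)]
print(norms[-1])   # the error norm has shrunk toward 0
print(ratios[-1])  # the per-step ratio approaches |lambda_max| = 0.9
```

Note that the per-step reduction settles at |λmax| = 0.9 regardless of the starting error, which is the point of the exercise.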

2. In this problem, you will see that the order of the equations in a system can affect the performance of an iterative solution method.

1. For the given system of equations, use the applet to compute five iterations with both the Jacobi and Gauss-Seidel Methods, using initial value x(0) = (-0.5, 0.5), and record your results. For each method, complete a table like the one below, which shows the first row of entries for Jacobi's Method. To better see the results, click the Default Axes button and then Zoom out a few times.

k    x(k)                    ||error(k)||
1    (0.000000, 3.000000)    3.162278
2
3
4
5

Also, find the eigenvalues of the iteration matrix B for each method, and use this information to explain why the approximations do not converge for either method.

2. Now rearrange (swap) the two equations, so that the system has the reversed form, and use the applet to compute five iterations, again making a table for each method. This time the approximations produced by the iterations should converge. Find the eigenvalues of the new iteration matrix B for each method, and use this information to explain why the approximations converge in this case. To better see the results, click the Default Axes button and then Zoom in a few times.

3. In this problem you will investigate how strict diagonal dominance of the coefficient matrix affects whether the Jacobi and/or Gauss-Seidel Methods converge.

1. Show that if a 2 x 2 coefficient matrix is strictly diagonally dominant, then the eigenvalues of the iteration matrices B corresponding to the Jacobi and Gauss-Seidel Methods are of magnitude < 1 (which guarantees convergence, as you found in Exercise 7).

2. In Exercise 8 you worked with two coefficient matrices A, one for each ordering of the equations. Comment on how the diagonal dominance (or lack of it) of each matrix A relates to the convergence (or divergence) results you found in Exercise 8.

3. Give a 2 x 2 system of equations in which the coefficient matrix A is strictly diagonally dominant, and compute the first five iterations to help verify that both methods produce approximations that are converging to the exact solution.  Make a table similar to the one in Exercise 8.  It doesn’t matter what you choose for  b  or for the initial guess x(0).
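The check in part (a) can be sketched numerically. For a 2 x 2 matrix, the Jacobi iteration matrix B = -D⁻¹(L + U) has eigenvalues satisfying λ² = (a12 a21)/(a11 a22), so both eigenvalues have magnitude sqrt(|a12 a21| / |a11 a22|). The two example matrices below are made up for illustration:

```python
import math

def is_strictly_diagonally_dominant(A):
    """Row-wise strict diagonal dominance for a 2x2 matrix."""
    return abs(A[0][0]) > abs(A[0][1]) and abs(A[1][1]) > abs(A[1][0])

def jacobi_eig_magnitude(A):
    """|lambda| for the 2x2 Jacobi iteration matrix B = -D^{-1}(L+U).
    For 2x2, lambda^2 = (a12*a21)/(a11*a22), so both eigenvalues
    (real or complex) have magnitude sqrt(|a12*a21| / |a11*a22|)."""
    return math.sqrt(abs(A[0][1]*A[1][0]) / abs(A[0][0]*A[1][1]))

# Hypothetical examples: a dominant matrix and a non-dominant one
A_good = [[3.0, 2.0], [2.0, 3.0]]   # strictly diagonally dominant
A_bad  = [[1.0, 2.0], [3.0, 1.0]]   # not diagonally dominant

print(is_strictly_diagonally_dominant(A_good), jacobi_eig_magnitude(A_good))  # True 0.666...
print(is_strictly_diagonally_dominant(A_bad),  jacobi_eig_magnitude(A_bad))   # False 2.449...
```

Strict diagonal dominance (|a11| > |a12| and |a22| > |a21|) forces |a12 a21| < |a11 a22|, which is exactly the condition that makes this eigenvalue magnitude less than 1.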

4. Consider Jacobi’s Method for solving the given system Ax = b.

1. Verify that the iteration matrix B corresponding to the Jacobi Method for this system is

B = [0, −1/2; −1/2, 0],

and verify that the eigenvalues for this matrix are λ1 = 0.5 and λ2 = −0.5, with corresponding eigenvectors v1 = (1, −1) and v2 = (1, 1).  (All you need to show is that Bv1 = λ1v1 and Bv2 = λ2v2.)

2. Suppose that b = (0, 0), so that the true solution is x = (0, 0). If we choose x(0) = (5, −1), then e(0) = x − x(0) = (−5, 1).  Verify that e(0) = c1v1 + c2v2, where c1 = −3 and c2 = −2, and where v1 = (1, −1) and v2 = (1, 1) are the eigenvectors of B.

3. Use the fact that e(k) = c1λ1^k v1 + c2λ2^k v2 to show that e(k) = −3(0.5)^k v1 − 2(−0.5)^k v2, so that ||e(k)|| = (1/2)^k ||e(0)||.

Compute the errors e(1), e(2), e(3), e(4), and e(5) in five iterations of the Jacobi Method starting from  x(0) = (5, − 1), as in (b). Check your work by using the applet to find the first five iterations when using Jacobi's Method for this system and initial guess, and sketch or print the applet's graph of the five approximations.

4. Now suppose that b = (3, 0) so that the true solution is x = (2, −1). Suppose also that x(0) = (7, −2), so that we happen to have the same initial error, e(0) = x − x(0) = (−5, 1), as in (b) and (c). Use the applet to find the first five iterations and corresponding errors e(1), e(2), e(3), e(4), and e(5) for this new system.  (The coefficient matrix A is the same, and thus the iteration matrix B is also the same -- only the right-hand side b is changed.)  Again sketch or print the graph that shows the approximations. Compare your results here to your results in (c), both the actual errors e(1), e(2), e(3), e(4), and e(5) and the graph of the results.

5. For this same coefficient matrix A, suppose that any b and x(0) are chosen so that the initial error is e(0) = x − x(0) = (−5, 1), as in (b)-(d). What would you predict that the errors e(1), e(2), e(3), e(4), and e(5) would be?

6. Support your answer in (e) by finding any b and x(0) such that e(0) = (−5, 1) -- you choose b and x(0) from the infinite number of possibilities -- and then use the applet to compute the errors e(1), e(2), e(3), e(4), and e(5). Hint: First choose x -- which will give you b, since Ax = b -- and then choose x(0) so that e(0) = x − x(0), that is, x(0) = x − (−5, 1). Sketch or print the applet's graph of the five approximations, and compare this to your graphs from (c) and (d).
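Parts (b)-(f) can also be checked without the applet. The sketch below runs Jacobi's Method for the setup in part (c); the coefficient matrix A = [[2, 1], [1, 2]] is an assumption (the exercise's matrix is not reproduced here), chosen to be consistent with the stated iteration matrix B = [0, −1/2; −1/2, 0], its eigenvalues ±0.5, and the solution x = (2, −1) for b = (3, 0) in part (d):

```python
import math

# Assumed coefficient matrix (see lead-in); b = (0, 0) as in part (b),
# so the true solution is x = (0, 0) and the error equals the iterate.
A = [[2.0, 1.0], [1.0, 2.0]]
b = (0.0, 0.0)
x = (5.0, -1.0)                    # initial guess x(0), so e(0) = (-5, 1)

errs = [math.hypot(x[0], x[1])]    # ||e(0)|| = sqrt(26)
for k in range(1, 6):
    # One Jacobi sweep: both components use the *old* iterate
    x = ((b[0] - A[0][1]*x[1]) / A[0][0],
         (b[1] - A[1][0]*x[0]) / A[1][1])
    errs.append(math.hypot(x[0], x[1]))

print(errs)   # each entry is exactly half the previous one
```

Because |λ1| = |λ2| = 0.5 here, the error norm is cut exactly in half at every step: ||e(k)|| = (1/2)^k sqrt(26).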

5. Do Exercise 10 with the Gauss-Seidel Method instead of the Jacobi Method. You will need to find B and its eigenvalues and eigenvectors, as well as c1 and c2.

6. Each of the following systems has the same solution. Consider solving each using Jacobi’s Method. Since the coefficient matrices A are different, the iteration matrices B in each case are different, and thus the convergence properties of Jacobi's Method in each case are different.

1. Using x(0) = (4, 1) for each system, use the applet to do five iterations. (You'll probably need to Zoom out to see the results for the third system.) For all three systems, record the ||error|| after each iteration, and describe what the graph in the applet is showing. (You may sketch or print the applet's graph and comment on what you see.)

2. For each of the three systems, find the iteration matrix B and its eigenvalues, and use the eigenvalues to explain the results in (a). Note: the "circular" behavior of the Jacobi Method for the second system occurs because the eigenvalues of B for this system are complex.

7. Do Exercise 12 with the Gauss-Seidel Method instead of Jacobi’s Method.

8. The rate of convergence corresponds to how the error is reduced by performing another iteration. For this problem we will think of the rate of convergence as how much the error shrinks with each iteration -- in other words, it is the ratio of the new error to the old error: ||e(k)|| / ||e(k-1)||. So, for example, if r = ||e(k)|| / ||e(k-1)|| = 0.7, then ||e(k)|| = 0.7||e(k-1)||. Obviously we would always want r < 1 in order for the error to approach 0.

We will consider the convergence rate of Jacobi’s Method for solving the given system. Recall that ||e(k)|| = ||Be(k-1)|| ≈ |λmax| ||e(k-1)||, where λmax is the largest (in magnitude) eigenvalue of the iteration matrix B. In general (for larger systems), this approximate relationship often does not hold for the first few iterations, but the approximation often becomes more accurate for later iterations. It turns out that for 2 x 2 systems, if both eigenvalues are equal in magnitude, |λ1| = |λ2|, then ||e(k)|| = |λmax| ||e(k-1)|| exactly and immediately, which also means that the rate of convergence at every step would be ||e(k)|| / ||e(k-1)|| = |λmax|. For

A = [a11, a12; a21, a22],

if a11 = a22 and a12 = a21, it turns out that the two eigenvalues of the iteration matrix B for Jacobi's Method will be of the same size, as can be seen on page 7, Analysis of Jacobi and GS Methods.

1. Find the two eigenvalues of the iteration matrix B corresponding to Jacobi’s Method for the 2 x 2 system described above.

2. Verify the relationship ||e(k)|| / ||e(k-1)|| = |λmax| by completing a table like the one below (use the applet to do the work) and then comparing these results to the eigenvalues found in (a). Use x(0) = (0, 0).
k    ||e(k)||    ||e(k)|| / ||e(k-1)||
0    1.414214
1    0.942809    0.666666
2
3
4
In general, what would ||e(k)|| / ||e(k-1)|| be for A = [a, b; b, a]? What conditions on a and b would guarantee convergence (that is, what would guarantee that ||e(k)|| / ||e(k-1)|| < 1)? How does this relate to Exercise 8(a)?
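For a concrete check, the sketch below runs Jacobi's Method for a matrix of this special form. The values a = 3, b = 2, and true solution x = (1, 1) are assumptions, chosen because they reproduce the table entries ||e(0)|| = 1.414214 and ratio 0.666666:

```python
import math

# Assumed system of the special form A = [[a, b], [b, a]]; for this form
# the Jacobi eigenvalues are +-(b/a), so the error shrinks by exactly
# |b/a| every sweep.
a, bb = 3.0, 2.0
A = [[a, bb], [bb, a]]
x_true = (1.0, 1.0)                                  # assumed solution
rhs = (A[0][0]*x_true[0] + A[0][1]*x_true[1],
       A[1][0]*x_true[0] + A[1][1]*x_true[1])        # b = A x = (5, 5)

def err(x):
    return math.hypot(x[0] - x_true[0], x[1] - x_true[1])

x = (0.0, 0.0)                                       # x(0)
errs = [err(x)]
for k in range(1, 5):
    # One Jacobi sweep (both components from the old iterate)
    x = ((rhs[0] - A[0][1]*x[1]) / A[0][0],
         (rhs[1] - A[1][0]*x[0]) / A[1][1])
    errs.append(err(x))

ratios = [errs[k] / errs[k-1] for k in range(1, 5)]
print(errs[0], ratios)   # sqrt(2) = 1.414214..., every ratio is 2/3
```

With this form, convergence requires |b/a| < 1, i.e. |b| < |a|, which is exactly strict diagonal dominance for this matrix.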

9. Do Exercise 14 with the Gauss-Seidel Method instead of Jacobi’s Method.

10. In this problem you will use the SOR Method to solve the given 2 x 2 system.

1. Generally we choose a value for ω such that 1 < ω < 2. For each value of ω = 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, and 1.9, solve the system with x(0) = (0, 0) as the initial guess, and record the ||error|| after k = 10 iterations. Use the applet to perform the iterations. (Be sure to click the Initial guess button after entering each new value of ω.) You do not need to write down the results at each step; this time we are interested only in the final result for each value of ω. Complete a table like the one below. Based on your results, which value of ω seems to result in the fastest convergence? That is, which one gives the smallest error after 10 iterations? Note: this is not necessarily the optimal value of ω; it is simply the best choice among the nine values that we considered.

ω      ||error(10)||
1.1    0.000664
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9

2. Repeat (a), except use x(0) = (−20, 10) as your initial guess.

3. Explain why the best value of ω is the same in both (a) and (b).

4. For each initial guess, x(0) = (0, 0) and x(0) = (−20, 10), solve the system using the Gauss-Seidel Method (which is equivalent to the SOR Method for ω = 1), and for both cases record the ||error|| after 10 iterations. For which values of ω in part (a) does the SOR Method produce faster convergence results (that is, a smaller ||error|| after 10 iterations) than the Gauss-Seidel Method, and for which values is the convergence worse?

5. For this system, it turns out that for ω ≥ 2 the SOR Method will actually diverge. Again using the applet, do 50 iterations of the SOR Method using ω = 2 and ω = 2.05, and describe (or sketch/print and comment on) the results.

11. In this problem you will continue exploring the SOR Method for solving the given 2 x 2 system.

1. Find the B matrix for the SOR Method for this system (as a function of ω).

2. The characteristic equation for this B matrix is det(B − λI) = 0, a quadratic in λ. One could use the quadratic formula to find the two eigenvalues (as functions of ω) that satisfy it. It turns out that if the two eigenvalues are the same, then ||B|| = |λmax| is minimized, which results in the fastest possible convergence for the SOR Method. The two eigenvalues will be the same when the discriminant b² − 4ac = 0 in the characteristic equation. Ignoring the factor of 1/16 in front, we want to examine the discriminant of aλ² + bλ + c, where a = 1, b = −(9ω² − 32ω + 32), and c = 256(ω − 1)². In particular, we want to find where b² = 4ac, that is, (9ω² − 32ω + 32)² = 1024(ω − 1)².

We can simplify this last equation to 9ω² − 64ω + 64 = 0. Find the value of ω that results in the two eigenvalues being the same by solving 9ω² − 64ω + 64 = 0 for ω, keeping in mind that we want the value that satisfies 1 < ω < 2.

3. If Exercise 16 was assigned, compare the value you just found in 17(b) to the result you found in Exercise 16 for an optimal value of ω [of the nine ω values in 16(a)].
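The root-finding in part (b) can be checked numerically; a minimal sketch that solves 9ω² − 64ω + 64 = 0 with the quadratic formula and keeps the root in (1, 2):

```python
import math

# Solve 9*w^2 - 64*w + 64 = 0 and keep the root with 1 < w < 2.
a, b, c = 9.0, -64.0, 64.0
disc = b*b - 4*a*c                      # 4096 - 2304 = 1792
roots = [(-b - math.sqrt(disc)) / (2*a),
         (-b + math.sqrt(disc)) / (2*a)]
w_opt = [r for r in roots if 1 < r < 2][0]
print(w_opt)                            # (32 - 8*sqrt(7)) / 9, about 1.204
```

This value of about 1.204 is consistent with the behavior in the table for Exercise 16(a), where the error after 10 iterations is smallest for ω near 1.2.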

12. Warning: this problem is for the determined, patient, and strong-hearted student. Consider the SOR Method for solving the general 2 x 2 system Ax = b.

1. Find the B matrix and its eigenvalues (as functions of a11, a12, a21, and a22).

2. Find the value of ω that results in the two eigenvalues being the same. (This is the value of ω that minimizes ||B|| = |λmax|.) This ω is a function of a11, a12, a21, and a22. What is the single eigenvalue (with algebraic multiplicity 2) in this case?

3. For the ω found in (b), what conditions on a11, a12, a21, and a22 guarantee that ||B|| = |λmax| < 1?

David M. Strong, "Iterative Methods for Solving *Ax* = *b* - Exercises, Part 2: All Methods," Convergence (July 2005)
