A third iterative method, called the Successive Overrelaxation (SOR) Method, is a generalization of and improvement on the Gauss-Seidel Method. Here is the idea:
For any iterative method, in finding x^{(k+1)} from x^{(k)}, we move a certain amount in a particular direction from x^{(k)} to x^{(k+1)}. This direction is the vector x^{(k+1)} − x^{(k)}, since x^{(k+1)} = x^{(k)} + (x^{(k+1)} − x^{(k)}). If we assume that the direction from x^{(k)} to x^{(k+1)} is taking us closer, but not all the way, to the true solution x, then it would make sense to move in the same direction x^{(k+1)} − x^{(k)}, but farther along that direction.
Here is how we derive the SOR Method from the Gauss-Seidel Method. First, notice that we can write the Gauss-Seidel equation as

    D x^{(k+1)} = −L x^{(k+1)} − U x^{(k)} + b,

so that

    x^{(k+1)} = D^{−1} ( −L x^{(k+1)} − U x^{(k)} + b ).

We can subtract x^{(k)} from both sides to get

    x^{(k+1)} − x^{(k)} = D^{−1} ( −L x^{(k+1)} − U x^{(k)} + b ) − x^{(k)}.
Now think of this as the Gauss-Seidel correction (x^{(k+1)} − x^{(k)})_{GS}. As suggested above, it turns out that convergence x^{(k)} → x of the sequence of approximate solutions to the true solution is often faster if we go beyond the standard Gauss-Seidel correction. The idea of the SOR Method is to iterate

    x^{(k+1)} = x^{(k)} + ω (x^{(k+1)} − x^{(k)})_{GS},

where, as we just found,

    (x^{(k+1)} − x^{(k)})_{GS} = D^{−1} ( −L x^{(k+1)} − U x^{(k)} + b ) − x^{(k)},

and where generally 1 < ω < 2. Notice that if ω = 1, then this is the Gauss-Seidel Method.
Written out in detail, the SOR Method is

    x^{(k+1)} = x^{(k)} + ω [ D^{−1} ( −L x^{(k+1)} − U x^{(k)} + b ) − x^{(k)} ]
              = (1 − ω) x^{(k)} + ω D^{−1} ( −L x^{(k+1)} − U x^{(k)} + b ).
We can multiply both sides by matrix D and divide both sides by ω to rewrite this as

    (1/ω) D x^{(k+1)} = (1/ω − 1) D x^{(k)} − L x^{(k+1)} − U x^{(k)} + b,
then collect the x^{(k+1)} terms on the left-hand side to get

    ( L + (1/ω) D ) x^{(k+1)} = ( (1/ω − 1) D − U ) x^{(k)} + b.
When we solve for x^{(k+1)}, we get

    x^{(k+1)} = ( L + (1/ω) D )^{−1} [ ( (1/ω − 1) D − U ) x^{(k)} + b ].
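As a concrete check of the matrix-form update x^{(k+1)} = (L + (1/ω)D)^{−1}[((1/ω − 1)D − U)x^{(k)} + b], here is a minimal numpy sketch. The 2 × 2 system, the choice ω = 1.1, and the iteration count are illustrative assumptions, not values from the text.

```python
import numpy as np

# A made-up 2x2 diagonally dominant test system A x = b with exact
# solution x = (1, 2); the matrix and omega are illustrative only.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([6.0, 7.0])

D = np.diag(np.diag(A))   # diagonal part of A
L = np.tril(A, k=-1)      # strictly lower-triangular part
U = np.triu(A, k=1)       # strictly upper-triangular part

omega = 1.1               # relaxation parameter, 1 < omega < 2

# One SOR step in matrix form:
#   x^{(k+1)} = (L + (1/omega) D)^{-1} [ ((1/omega - 1) D - U) x^{(k)} + b ]
M = L + D / omega
N = (1.0 / omega - 1.0) * D - U

x = np.zeros(2)           # initial guess x^{(0)} = 0
for _ in range(50):
    x = np.linalg.solve(M, N @ x + b)

print(x)                  # converges to the exact solution [1. 2.]
```

Note that in practice the lower-triangular system M x^{(k+1)} = N x^{(k)} + b would be solved by forward substitution rather than by forming an inverse; `np.linalg.solve` is used here only to keep the sketch short.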
Notice that the SOR Method is also of the form x^{(k+1)} = Bx^{(k)} + c, so the general convergence analysis on page 6 also applies to the SOR Method, as does the more specific analysis on page 7 for the Jacobi and Gauss-Seidel Methods. The iteration matrix B that determines convergence of the SOR Method is

    B = ( L + (1/ω) D )^{−1} ( (1/ω − 1) D − U ),
so optimal convergence is achieved by choosing a value of ω that minimizes the spectral radius of B, that is, the magnitude of the largest eigenvalue of B.
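Because the spectral radius of B = (L + (1/ω)D)^{−1}((1/ω − 1)D − U) governs the convergence rate, one practical option is to search for a good ω numerically by scanning a grid. Here is a sketch; the 2 × 2 matrix and the grid are made-up assumptions, not from the text.

```python
import numpy as np

# Scan the spectral radius of the SOR iteration matrix
#   B(omega) = (L + (1/omega) D)^{-1} ((1/omega - 1) D - U)
# over a grid of omega values; the 2x2 matrix is a made-up example.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
D = np.diag(np.diag(A))
L = np.tril(A, k=-1)
U = np.triu(A, k=1)

def spectral_radius(omega):
    # B(omega) computed as a matrix solve rather than an explicit inverse
    B = np.linalg.solve(L + D / omega, (1.0 / omega - 1.0) * D - U)
    return max(abs(np.linalg.eigvals(B)))

omegas = np.linspace(1.0, 1.9, 91)      # omega = 1 is Gauss-Seidel
radii = [spectral_radius(w) for w in omegas]
best = omegas[int(np.argmin(radii))]    # grid point giving fastest convergence

print(best, min(radii))
```

For this particular matrix the minimizing ω lies just above 1, and its spectral radius is smaller than the Gauss-Seidel value at ω = 1, illustrating why over-relaxation can pay off.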
As we did earlier for the Jacobi and Gauss-Seidel Methods, we can find the eigenvalues and eigenvectors for the 2 × 2 SOR Method B matrix. However, because this is quite a bit more complicated, we do not derive these expressions here. Rather, we leave it as Exercise 18 (next page) for the ambitious student or the challenging instructor.
In practice, we typically use a computer to perform the iterations, so we need implementable algorithms in order to use these methods for n × n systems. We conclude by giving one possible set of algorithms for finding element x_{i}^{(k+1)} given x_{1}^{(k)}, x_{2}^{(k)}, … , x_{n}^{(k)}. Although these particular algorithms are not quite optimally efficient, writing them this way makes the slight but important differences among the three methods more obvious.
Method          Algorithm for performing iteration k + 1:

Jacobi          for i = 1 to n do:
                    x_i^{(k+1)} = (1/a_{ii}) ( b_i − Σ_{j≠i} a_{ij} x_j^{(k)} )

Gauss-Seidel    for i = 1 to n do:
                    x_i^{(k+1)} = (1/a_{ii}) ( b_i − Σ_{j<i} a_{ij} x_j^{(k+1)} − Σ_{j>i} a_{ij} x_j^{(k)} )

SOR             for i = 1 to n do:
                    x_i^{(k+1)} = (1 − ω) x_i^{(k)} + (ω/a_{ii}) ( b_i − Σ_{j<i} a_{ij} x_j^{(k+1)} − Σ_{j>i} a_{ij} x_j^{(k)} )

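The three componentwise update rules can be transcribed almost directly into code. The sketch below mirrors the algorithms rather than optimizing them; the 4 × 4 test system and the value ω = 1.05 are made-up examples, not from the text.

```python
import numpy as np

def jacobi_step(A, b, x_old):
    n = len(b)
    x_new = np.empty(n)
    for i in range(n):
        # Jacobi uses only old values x_j^{(k)}
        s = sum(A[i, j] * x_old[j] for j in range(n) if j != i)
        x_new[i] = (b[i] - s) / A[i, i]
    return x_new

def gauss_seidel_step(A, b, x_old):
    n = len(b)
    x_new = x_old.copy()
    for i in range(n):
        # uses new values x_j^{(k+1)} for j < i, old values for j > i
        s = (sum(A[i, j] * x_new[j] for j in range(i))
             + sum(A[i, j] * x_old[j] for j in range(i + 1, n)))
        x_new[i] = (b[i] - s) / A[i, i]
    return x_new

def sor_step(A, b, x_old, omega):
    n = len(b)
    x_new = x_old.copy()
    for i in range(n):
        # Gauss-Seidel value, blended: (1 - omega)*old + omega*GS
        s = (sum(A[i, j] * x_new[j] for j in range(i))
             + sum(A[i, j] * x_old[j] for j in range(i + 1, n)))
        x_new[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
    return x_new

# Made-up diagonally dominant 4x4 system with exact solution (1, 2, 3, 4)
A = np.array([[10.0, 1.0, 1.0, 1.0],
              [1.0, 10.0, 1.0, 1.0],
              [1.0, 1.0, 10.0, 1.0],
              [1.0, 1.0, 1.0, 10.0]])
b = A @ np.array([1.0, 2.0, 3.0, 4.0])

x = np.zeros(4)
for _ in range(30):
    x = sor_step(A, b, x, omega=1.05)
print(x)   # close to [1. 2. 3. 4.]
```

Note that Jacobi reads only old values, while Gauss-Seidel and SOR overwrite entries as they go, which is why `x_new` is updated in place; setting ω = 1 in `sor_step` reproduces `gauss_seidel_step` exactly.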