Theon's method naturally lends itself to an iterative matrix multiplication model, an example of a difference equation. This model allows us to see how eigenvalues and eigenvectors are useful in an application taken from the history of mathematics. If we let \({\vec v_n} = \left[ \begin{array}{r} x_n \\ y_n\end{array} \right] \) and \(A = \left[ \begin{array}{rr} 1 & 1 \\ 2 & 1\end{array} \right] ,\) then \({\vec v_{n+1}} = A{\vec v_n}.\) Since \({\vec v_1} = A{\vec v_0},\) we have \({\vec v_2} = A{\vec v_1} = A A {\vec v_0} = A^2\,{\vec v_0},\) and, in general, \({\vec v_n} = A^n\,{\vec v_0}.\) Try this out starting with \(x_0=y_0=1\) to verify that you get the same values as in Table 1, repeated here for convenience.

\[\begin{array}{r|r|r|r} n & x_n & y_n & y_n/x_n \\\hline 0 & 1 & 1 & 1 \\\hline 1 & 2 & 3 & 1.5 \\\hline 2 & 5 & 7 & 1.4 \\\hline 3 & 12 & 17 & 1.4167 \\\hline 4 & 29 & 41 & 1.4138\\\hline 5 & 70 & 99 & 1.4143\\\hline 6 & 169 & 239 & 1.4142\end{array}\]

**Table 1** (repeated)**.** Side and diagonal numbers
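If you would like to verify the table by machine rather than by hand, the iteration can be sketched in a few lines of Python (the code and its names are ours, not part of the historical method). Multiplying by \(A\) sends \((x, y)\) to \((x+y,\, 2x+y)\):

```python
def theon_step(x, y):
    # One application of A = [[1, 1], [2, 1]]: new x = x + y, new y = 2x + y.
    return x + y, 2 * x + y

x, y = 1, 1          # x_0 = y_0 = 1
rows = []
for n in range(7):
    rows.append((n, x, y, y / x))
    x, y = theon_step(x, y)

# Print the table: n, x_n, y_n, and the ratio y_n / x_n.
for n, xn, yn, ratio in rows:
    print(f"{n}  {xn:4d}  {yn:4d}  {ratio:.4f}")
```

Running this reproduces the side and diagonal numbers above, with the ratio column settling toward \(1.4142.\)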

This linear algebra viewpoint gives us another way to argue that this method works to approximate \(\sqrt{2}.\) To determine the long-range behavior of a system like this, we look at the "eigenstuff": the eigenvalues and eigenvectors of the matrix \(A.\) Solving for the eigenvalues, we get that \(\lambda_1 = 1+\sqrt{2} \approx 2.414\) and \(\lambda_2 = 1-\sqrt{2} \approx -0.414.\) Notice that \(|\lambda_1| >1\) and \(|\lambda_2|<1.\) As eigenvectors we can choose \({\vec w_1} = \left[ \begin{array}{r} 1 \\\sqrt{2}\end{array} \right] \) and \({\vec w_2} = \left[ \begin{array}{r} 1 \\ -\sqrt{2}\end{array} \right] .\) These eigenvectors are, of course, linearly independent, and so any initial vector \({\vec v_0}\) can be written as a linear combination of the eigenvectors: \({\vec v_0} = c_1{\vec w_1}+c_2{\vec w_2}.\) Then when we iterate we get that \[{\vec v_n} = A^n\,{\vec v_0} = c_1A^n\,{\vec w_1}+c_2A^n\,{\vec w_2} = c_1{\lambda_1}^n\,{\vec w_1}+c_2{\lambda_2}^n\,{\vec w_2}.\] Since \(|\lambda_2|<1,\) the term \({{\lambda_2}^{n}}\) goes to \(0\) as \(n\) grows, so any initial vector with \(c_1 \neq 0\) (as is the case for our starting vector) will tend toward the direction of \({\vec w_1} = \left[ \begin{array}{r} 1 \\\sqrt{2}\end{array} \right].\) Thus the second component, \(y,\) will, in the limit, be \(\sqrt{2}\) times the first component or \(x\) value. This shows us that \(\displaystyle{y_n \over x_n}\) tends to \(\sqrt 2\) as \(n\) goes to infinity.
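The eigenvalue argument above can also be checked numerically. The following Python sketch (using numpy, our choice of tool) computes the eigenvalues of \(A,\) finds the coordinates \(c_1, c_2\) of \({\vec v_0}\) in the eigenbasis, and watches the ratio \(y_n/x_n\) approach \(\sqrt{2}\):

```python
import numpy as np

A = np.array([[1.0, 1.0], [2.0, 1.0]])

# Columns of eigvecs are eigenvectors; eigvals should be 1 + sqrt(2) and 1 - sqrt(2).
eigvals, eigvecs = np.linalg.eig(A)

# Write v_0 = c_1 w_1 + c_2 w_2 by solving in the eigenbasis.
v0 = np.array([1.0, 1.0])
c = np.linalg.solve(eigvecs, v0)

# As n grows, the lambda_2 term dies off and y_n / x_n approaches sqrt(2).
for n in [1, 5, 10]:
    vn = np.linalg.matrix_power(A, n) @ v0
    print(n, vn[1] / vn[0])
```

Because \(|\lambda_2| \approx 0.414,\) the error term shrinks by more than half with each iteration, which matches how quickly the ratios in Table 1 settle down.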

Figure 5 shows the resulting vectors (in blue) when starting with \({\vec v_0}=\left[ \begin{array}{r} 1 \\1\end{array} \right] .\) The black dotted line is in the direction of vector \({\vec w_1} = \left[\begin{array}{r} 1 \\\sqrt{2}\end{array} \right].\) Clearly the vectors are converging to the direction of that first eigenvector. Note also that the vectors alternate being above and below that direction, much like the alternating \(\pm 1\) when considering the results algebraically.

**Figure 5.** Iterates approach an eigenvector.

You can try out this visualization in the GeoGebra applet below. Use the slider in the applet to see (again) how quickly the vectors \(\vec v_n\) converge to the vector \({\vec w_1}.\) Be sure to change the initial vector \(\vec v_0\) to see if you can slow the convergence.

The classroom activity Approximating Square Roots with Linear Algebra is available here. This applet helps students explore the convergence. A stand-alone version of the applet is available here.

Next, we generalize this vector method to approximate the square root of any positive integer.