Introduction

Many articles and books use matrix decompositions, in particular the eigendecomposition, to prove convergence of iterative methods such as the Jacobi method. It is not immediately obvious how and why those methods work, or what the connection is. So I wanted to start from fundamentals and introduce these tools progressively, so that we can better understand the techniques and apply them to novel problems.

There are various reasons, outside the scope of this article, for wanting to solve equations using iterative methods rather than direct ones.

I first use a single-variable case as a warmup to introduce the underlying idea, and then extend the technique to the multiple-variable case.

First, let's consider the single-variable case.

Given $3x = 6$, find $x$. We start with some random initial value $x_0$ for $x$ and improve it iteratively until the difference between two successive values is below a threshold. So we need an equation that produces a new value for $x$ from an old one. Let's try this:

Split $3$ as $M+B$

$(M+B)x = 6$

$Mx = -Bx + 6$

This splitting lets us start with a random $x_0$, compute $-Bx_0+6$, which equals $Mx_1$, and extract $x_1 = \frac{-Bx_0 + 6}{M}$ from it; hopefully the extracted $x_1$ is better than the initial random guess. We can then repeat the same step with the new value of $x$:

$$Mx_2 = -Bx_1+6 = -B( \frac{(-Bx_0+6)}{M} ) + 6$$

So, the pattern seems to be:

$$Mx_2 = -B\left(\frac{-Bx_0+6}{M}\right) + 6 = \frac{B^2}{M}x_0 - \frac{6B}{M} + 6$$

Let's separately group the coefficients:

Start with $x = x_0$

$Mx_1 = -Bx_0 + 6$

or

$x_1 = (\frac{-B}{M}) x_0 + (\frac{6}{M})$

Let

    -B/M = P
     6/M = z

Continuing,

$\begin{aligned} x_{i+1} &= P x_{i} + z \\ &= P [ P x_{i-1} + z ] + z \\ &= P^2 x_{i-1} + P z + z \end{aligned}$
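Unrolling this recurrence all the way back to $x_0$ (a small extra step, using the geometric series, valid when $|P| \lt 1$) shows where the iteration is headed:

$$x_{i+1} = P^{i+1} x_0 + (P^{i} + P^{i-1} + \dots + P + 1) z \to \frac{z}{1-P} = \frac{6/M}{1 + B/M} = \frac{6}{M+B} = \frac{6}{3} = 2$$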

Error Analysis:

Let the error at any step be the distance from the true value (which is unknown and denoted by $x$ in the equations below):

$e_0 = x_0 - x$

$e_1 = x_1 - x$

...

$e_i = x_i - x$

Or

$x_i = e_i + x$

And so on. Note that the error is for analysis purposes only; it is not used to drive the algorithm when computing the approximation (the error depends on the true value, which we do not know, so we cannot use it to drive or stop the algorithm!).

$\begin{aligned} x_{i+1} &= P x_{i} + z \\ &= P (x + e_{i}) + z \\ &= P x + z + P e_{i} \\ &= x + P e_{i} \end{aligned}$

This is because $Px + z = x$: the true value satisfies $Mx = -Bx + 6$, so $x = \frac{-B}{M}x + \frac{6}{M} = Px + z$. (Here $x$ is the actual value, so there is no confusion about $x$ appearing on both sides.)

So,

$\begin{aligned} e_{i+1} = x_{i+1} - x = P e_i + x - x = P e_{i} \end{aligned}$

So the error shrinks in magnitude if $|P| \lt 1$. If $P$ is negative (but still $|P| \lt 1$), successive errors alternate in sign, yet their magnitudes still decrease, so convergence still happens.

The following choice of values satisfies this requirement:

   B = -1
   M = 4
  (verify that B+M = 3)

   P = 1/4
   z = 6/4
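With these numbers, a couple of iterations by hand (starting from $x_0 = 0$, just to see the behavior) give:

$x_1 = 0.25 \cdot 0 + 1.5 = 1.5, \quad x_2 = 0.25 \cdot 1.5 + 1.5 = 1.875, \quad x_3 = 1.96875, \quad x_4 = 1.9921875, \dots$

Each step closes the remaining gap to the true value $2$ by a factor of $P = 0.25$.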

Convergence of solution:

How about convergence of $x_i$ itself?

$\begin{aligned} x_{i+1} - x &= P e_i \\ &= P^{i+1} e_0 \\ &= P^{i+1} (x_0 - x) \end{aligned}$

We see that the difference between the actual value and the iterate $x_i$ decreases as we continue the iteration.

Let's test this with some code:

import matplotlib.pyplot as plt
import numpy as np

%matplotlib inline

B = -1
M = 4

P = 0.25
z = 1.5

x0 = 1.1  # random starting point

# Note that x (the actual value) is used only for error analysis
x = 2

e0 = x0-x
el = [e0]
ec = e0

xl = [x0]
xc = x0

for i in range(10):
    # error recurrence: e_{i+1} = P * e_i
    en = P * ec
    el.append(en)
    ec = en
    # solution recurrence: x_{i+1} = P * x_i + z
    xn = P * xc + z
    xl.append(xn)
    xc = xn

fig = plt.figure(figsize=(10, 7))

err_ax = fig.add_subplot(121)
sol_ax = fig.add_subplot(122)


err_ax.set_title('Error')
err_ax.plot(np.arange(len(el)), el)

sol_ax.set_title('Solution')
sol_ax.plot(np.arange(len(xl)), xl)

We clearly see that the error goes to zero and $x_i$ converges to 2.0.

Now let's see how we can apply the same ideas to the multiple-variable case.

Extending the above to the multiple-variable (matrix) case

2x+y = 7

x+3y = 8

$\begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 7 \\ 8 \end{bmatrix}$

A x = b

$A$ is square and, in general, may or may not be invertible.
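For reference (so we can check the iterations later), the exact solution of this particular system by substitution is:

$y = 7 - 2x \;\Rightarrow\; x + 3(7 - 2x) = 8 \;\Rightarrow\; -5x = -13 \;\Rightarrow\; x = 2.6, \; y = 1.8$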

Let's use the split trick we employed earlier for single variable case.

Split A = M + B

(M+B)x = b

Mx = -Bx + b

Let's choose an $M$ that is invertible (and, ideally, easy to invert):

$x = M^{-1}(-Bx+b)$

Start with a random or zero vector for initial value, $x = x_0 = \begin{bmatrix}0 \\ 0\end{bmatrix}$

$\begin{aligned} x_1 &= M^{-1}(-Bx_0+b) \\ &= -M^{-1} B x_0 + M^{-1} b \end{aligned}$

$\begin{aligned} x_2 &= M^{-1} (-Bx_1+b) \\ &= -M^{-1} B x_1 + M^{-1} b \\ &= -M^{-1} B ( -M^{-1} B x_0 + M^{-1} b ) + M^{-1} b \\ &= M^{-1} B M^{-1} B x_0 - M^{-1} B M^{-1} b + M^{-1} b \end{aligned}$

The negative term is concerning: it is not clear whether convergence happens, and even if it does, the alternating signs make the convergence hard to analyze.

Let's try something different; maybe we could use subtraction instead of addition. Let's split like this:

A = M - N

Mx - Nx = b

$x = M^{-1} N x + M^{-1} b$

$x_1 = M^{-1} N x_0 + M^{-1} b$

Let $M^{-1} N = P$ and $M^{-1} b = z$ for ease.

$x_1 = P x_0 + z$

$\begin{aligned} x_2 = P (x_1) + z = P (P x_0 + z) + z = P^2 x_0 + P z + z \end{aligned}$
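Unrolling as in the single-variable case (a small extra step; it assumes the spectral-radius condition $\rho(P) \lt 1$ discussed below, so that the matrix geometric series converges):

$$x_{i+1} = P^{i+1} x_0 + (P^{i} + P^{i-1} + \dots + P + I) z \to (I - P)^{-1} z = (I - M^{-1}N)^{-1} M^{-1} b = (M - N)^{-1} b = A^{-1} b$$

So, when the iteration converges, it converges to the solution of $Ax = b$.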

Note: when $M$ is chosen to be the diagonal of $A$ (as we do below), this splitting is known as the Jacobi method.

Error analysis

Let's do an error analysis similar to how we did for single variable case.

$e_0 = x_0-x$

or

$e_i = x_i - x$

$\begin{aligned} x_{i+1} &= P x_i + z \\ &= P (e_i + x) + z \\ &= P e_i + P x + z \\ &= P e_i + x \end{aligned}$

$\begin{aligned} e_{i+1} &= x_{i+1} - x \\ &= P e_i \\ &= P^{i+1} e_0 \end{aligned}$

So, we want $P^k \to 0$ as $k \to \infty$ for the error to converge to zero.

But $P^k \to 0$ iff $||P^k|| \to 0$, which holds iff $\rho(P) \lt 1$. Here $\rho$ is the spectral radius (the largest eigenvalue magnitude).
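A quick numerical illustration of this (a sketch using numpy; the two matrices below happen to be the iteration matrices of the examples further down, one with $\rho \approx 0.41$ and one with $\rho = 1.5$):

import numpy as np
from numpy import linalg as LA
from numpy.linalg import matrix_power

P_conv = np.array([[0.0, -0.5], [-1.0/3.0, 0.0]])  # spectral radius ~ 0.41
P_div = np.array([[0.0, -1.5], [-1.5, 0.0]])       # spectral radius = 1.5

for P in (P_conv, P_div):
    rho = max(abs(LA.eigvals(P)))
    # ||P^20|| collapses towards 0 when rho < 1 and blows up when rho > 1
    print("rho =", rho, " ||P^20|| =", LA.norm(matrix_power(P, 20)))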

Now, how do we find M and N?

Since we want $M$ to be easily invertible, it would be ideal if it were a diagonal matrix. Maybe we could just take the diagonal elements of $A$ and put them in $M$; the remaining entries go into $-N$. Then $A = M - N$.

Then $P = M^{-1} N = M^{-1} (M-A) = I - M^{-1} A $

Or $M^{-1} A = I - P $.
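For our $2 \times 2$ example this gives (a quick hand computation; the code below prints the same $P$ and $z$):

$$M = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}, \quad N = M - A = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}, \quad P = M^{-1}N = \begin{bmatrix} 0 & -\tfrac{1}{2} \\ -\tfrac{1}{3} & 0 \end{bmatrix}, \quad z = M^{-1}b = \begin{bmatrix} \tfrac{7}{2} \\ \tfrac{8}{3} \end{bmatrix}$$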

From linear algebra,

$||e_{i+1}|| = || P e_i || \le ||P||_{p} ||e_i||$ for any induced (operator) norm $p$. If $||P||_{p} \lt 1$, the error keeps decreasing. If $A$ is diagonally dominant, then the 'iteration matrix' $P$ has an $\infty$-norm that is strictly less than 1. Note that $||P||_{p}$ might be less than 1 for some norms $p$ and greater than 1 for others, so this is only a sufficient condition. The necessary and sufficient condition is $\rho(P) \lt 1$.
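To see why diagonal dominance gives $||P||_{\infty} \lt 1$ (a quick check, spelled out here for completeness): with $M = \mathrm{diag}(A)$, the entries of $P = I - M^{-1}A$ are $-a_{ij}/a_{ii}$ off the diagonal and $0$ on the diagonal, so

$$||P||_{\infty} = \max_i \sum_{j \ne i} \frac{|a_{ij}|}{|a_{ii}|} \lt 1 \quad \text{whenever} \quad |a_{ii}| \gt \sum_{j \ne i} |a_{ij}| \ \text{for every row } i$$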

Let's try this on the above example in code.

import numpy as np
from numpy import linalg as LA

# converges when the spectral radius of P is < 1
# (guaranteed, for example, when A is strictly diagonally dominant)
# x (the true solution) is used for error analysis only

def solve(A, b, x, num_iter = 60):
    
    variable_count = len(x)
    x0 = np.zeros((variable_count, 1))
    
    e0 = x0-x
    e = [e0]

    ec = e0
    xl = [x0]
    xc = x0

    M = np.diag(np.diag(A))

    N = M - A
    P = LA.inv(M).dot(N)
    z = LA.inv(M).dot(b)

    eig = LA.eig(P)
    print("Eigen stuff of P")
    print(eig)

    print("Initial")
    print("M inverse: ")
    print(LA.inv(M))
    print("P (inv(M)*N): ")
    print(P)
    print ("z (inv(M) b)")
    print(z)

    for i in range(num_iter):
        en = P.dot(ec)
        e.append(en)
        ec = en
        xn = P.dot(xc) + z
        xl.append(xn)
        xc = xn
        norm = LA.norm(en, ord=2)
        if (norm < 1e-5):
            break
    return (e, xl)


A = np.array([[2, 1], [1, 3]])
b = np.array([[7], [8]])

x = np.array([[1], [1]])

el, xl = solve(A, b, x)

print("Result after last iteration")
print(xl[-1])
Eigen stuff of P
(array([ 0.40824829, -0.40824829]), array([[ 0.77459667,  0.77459667],
       [-0.63245553,  0.63245553]]))
Initial
M inverse: 
[[0.5        0.        ]
 [0.         0.33333333]]
P (inv(M)*N): 
[[ 0.         -0.5       ]
 [-0.33333333  0.        ]]
z (inv(M) b)
[[3.5       ]
 [2.66666667]]
Result after last iteration
[[2.59999071]
 [1.79999357]]


A = np.array([[2, -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 2]])
b = np.array([[25], [-24], [21], [-15]])

x = np.array([[11], [-3], [7], [-4]])

el, xl = solve(A, b, x, 60)

print("Result after last iteration")
print(xl[-1])
Eigen stuff of P
(array([-0.80901699, -0.30901699,  0.80901699,  0.30901699]), array([[ 0.37174803,  0.60150096, -0.37174803, -0.60150096],
       [-0.60150096, -0.37174803, -0.60150096, -0.37174803],
       [ 0.60150096, -0.37174803, -0.60150096,  0.37174803],
       [-0.37174803,  0.60150096, -0.37174803,  0.60150096]]))
Initial
M inverse: 
[[0.5 0.  0.  0. ]
 [0.  0.5 0.  0. ]
 [0.  0.  0.5 0. ]
 [0.  0.  0.  0.5]]
P (inv(M)*N): 
[[0.  0.5 0.  0. ]
 [0.5 0.  0.5 0. ]
 [0.  0.5 0.  0.5]
 [0.  0.  0.5 0. ]]
z (inv(M) b)
[[ 12.5]
 [-12. ]
 [ 10.5]
 [ -7.5]]
Result after last iteration
[[10.99998147]
 [-2.99998811]
 [ 6.99997002]
 [-3.99999265]]

# The above method does not converge here: A is not diagonally dominant and,
# as the eigenvalues below show, the spectral radius of P is 1.5 > 1
A = np.array([[2, 3], [3, 2]])
b = np.array([[7], [8]])

x = np.array([[2], [1]])

el, xl = solve(A, b, x)

print("Result after last iteration")
print(xl[-1])
Eigen stuff of P
(array([ 1.5, -1.5]), array([[ 0.70710678,  0.70710678],
       [-0.70710678,  0.70710678]]))
Initial
M inverse: 
[[0.5 0. ]
 [0.  0.5]]
P (inv(M)*N): 
[[ 0.  -1.5]
 [-1.5  0. ]]
z (inv(M) b)
[[3.5]
 [4. ]]
Result after last iteration
[[-7.35369374e+10]
 [-3.67684687e+10]]
# Note: this A is not strictly diagonally dominant (row 1: |4| < |2| + |3|), yet Jacobi still
# converges here because the spectral radius of P is ~0.82 < 1 (see the eigenvalues below)
A = np.array([[4, 2, 3], [3, -5, 2], [-2, 3, 8]])
b = np.array([[8], [-14], [27]])
x = np.array([[-1], [3], [2]])
e, xl = solve(A, b, x)

print("Result after last iteration")
print(xl[-1])
Eigen stuff of P
(array([-0.08875095+0.81309913j, -0.08875095-0.81309913j,
        0.1775019 +0.j        ]), array([[ 0.66666828+0.j        ,  0.66666828-0.j        ,
        -0.55176491+0.j        ],
       [-0.22848143-0.58075007j, -0.22848143+0.58075007j,
        -0.6277451 +0.j        ],
       [ 0.23121087-0.33558982j,  0.23121087+0.33558982j,
         0.54908249+0.j        ]]))
Initial
M inverse: 
[[ 0.25   0.     0.   ]
 [-0.    -0.2   -0.   ]
 [ 0.     0.     0.125]]
P (inv(M)*N): 
[[ 0.    -0.5   -0.75 ]
 [ 0.6    0.     0.4  ]
 [ 0.25  -0.375  0.   ]]
z (inv(M) b)
[[2.   ]
 [2.8  ]
 [3.375]]
Result after last iteration
[[-0.99999138]
 [ 2.99997978]
 [ 1.99999301]]

Convergence Analysis of error continued

What if $e_0$ = an eigenvector of P, say $v_0$ with eigenvalue $\lambda_0$?

$\begin{aligned} e_{k} &= P^k v_0 \\ &= P^{k-1} P v_0 \\ &= P^{k-1} \lambda_0 v_0 \\ &= \lambda_0 P^{k-1} v_0 \\ &= \lambda^{k}_0 v_0 \end{aligned}$

If $|\lambda_0| \lt 1$, the error goes to zero.

What if $e_0$ is not an eigenvector?

It can be written as a linear combination of the eigenvectors of $P$ (assuming $P$ is diagonalizable, so that its eigenvectors form a basis). Collect them as columns of a matrix $Q$.

Let $v_1, v_2, \dots, v_n$ be the eigenvectors of $P$ (an eigenbasis), and let $\lambda_1, \lambda_2, \dots, \lambda_n$ be the corresponding eigenvalues.

$Q = [v_1 \dots v_n]$

$e_0 = c_1 v_1 + c_2 v_2 + \dots + c_n v_n$

$e_{i+1} = P^{i+1} e_0 = P^{i+1} (c_1 v_1 + c_2 v_2 + \dots + c_n v_n) = c_1 \lambda_1^{i+1} v_1 + c_2 \lambda_2^{i+1} v_2 + \dots + c_n \lambda_n^{i+1} v_n$.

Iff $|\lambda_i| \lt 1$ for all $i$, $e_k \to 0$.

Also,

Spectral radius: $\rho = \max_i |\lambda_i|$.

$||e_k||$ is eventually dominated by $||c_j \lambda_j^k v_j||$ for the index $j$ with $|\lambda_j| = \rho$, so $||e_k|| \sim \rho^k$, which goes to zero if $\rho \lt 1$.

Using matrices:

$P = QDQ^{-1}$ where $D$ is the diagonal matrix of eigenvalues and $Q$ has the corresponding eigenvectors as columns.

$e_{i+1} = (QDQ^{-1})^{i+1}e_0 = QD^{i+1}Q^{-1}e_0$. So, if all eigenvalues have magnitude less than 1, the error goes to zero.
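A quick numerical check of this identity (a sketch; $P$ here is the iteration matrix of the $2 \times 2$ example above):

import numpy as np
from numpy import linalg as LA
from numpy.linalg import matrix_power

P = np.array([[0.0, -0.5], [-1.0/3.0, 0.0]])
eigvals, Q = LA.eig(P)
D = np.diag(eigvals)

k = 10
# P^k computed directly vs. via the eigendecomposition Q D^k Q^{-1}
print(np.allclose(matrix_power(P, k), Q.dot(matrix_power(D, k)).dot(LA.inv(Q))))  # True
print(LA.norm(matrix_power(P, k)))  # small, since both eigenvalues have magnitude ~ 0.41 < 1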

If $P$ does not have a full set of independent eigenvectors (i.e., is not diagonalizable), you can use the Jordan form of $P$, since the same similarity argument extends to the Jordan form (see this reference for example: https://math.la.asu.edu/~gardner/iter.pdf).

Convergence Analysis of solution

How about convergence of $x_{i+1}$?

Let $x^*$ be the (unknown) solution. Then it satisfies $x^* = Px^* + z$ by definition.

Consider,

$x_{i+1} - x^* = (Px_i+z)-(Px^*+z) = P(x_i-x^*) = P((Px_{i-1}+z) - (Px^* + z)) = P^2(x_{i-1}-x^*) = \dots = P^{i+1}(x_0-x^*)$.

So, if $\rho(P) \lt 1$, then $||x_{i+1} - x^*|| = ||P^{i+1}(x_0-x^*)|| \to 0$, i.e., $x_{i+1} \to x^*$.

from numpy.linalg import matrix_power

A = np.array([[4, 2, 3], [3, -5, 2], [-2, 3, 8]])
b = np.array([[8], [-14], [27]])
x_solution = np.array([[-1], [3], [2]])
x_0 = np.zeros((3, 1))


M = np.diag(np.diag(A))
N = M - A
P = LA.inv(M).dot(N)

# consider 11th iteration
i = 11
# grab x_11 from the output above
x_11 = np.array([[-0.66699081], [ 2.85785307], [ 2.09930262]])

lhs = x_11 - x_solution
rhs = (matrix_power(P, i+1)).dot(x_0-x_solution)
print(lhs)
print(rhs)
[[ 0.33300919]
 [-0.14214693]
 [ 0.09930262]]
[[ 0.33300919]
 [-0.14214693]
 [ 0.09930262]]

Now you can see that the rate at which each component of $x_{i+1}$ converges is governed by the eigenvalues of $P$. For,

$x_{i+1} - x^* = QD^{i+1}Q^{-1} (x_{0} - {x^*})$

Let's simplify as we are using $x_0 = 0$ most of the time.

$x_{i+1} = x^* - QD^{i+1}Q^{-1} {x^*}$

Let's study how each individual component of $x_{i+1}$ progresses towards the corresponding component of the solution as we iterate.

Let $q_{(j)}$ be the $j$-th row of $Q$ and $q'_{(k)}$ be the $k$-th row of $Q^{-1}$. Then, after some mundane manipulation of the above equation, we see that

${x_{i+1}}_{(j)} = {x^*}_{(j)} - \sum^{n}_{k=1} Q_{jk} \, {\lambda_k}^{i+1} \, \big( q'_{(k)} x^* \big)$

where $Q_{jk}$ is the $k$-th entry of $q_{(j)}$.

This is too complex to see what is going on. There is light at the end of the tunnel, though: we just need to change our frame of reference to the eigenbasis of $P$.

Let

$Q^{-1} v = w$, i.e., $w$ holds the coordinates of a vector $v$ in the eigenbasis of $P$.

Rewrite above using w. First multiply both sides with $Q^{-1}$:

$Q^{-1}x_{i+1} = Q^{-1}x^* - D^{i+1}Q^{-1} {x^*}$

So,

$w_{i+1} = w^* - D^{i+1} w^*$

Now, look at the j-th component of $w_{i+1}$:

${w_{i+1}}_{(j)} = {w^*}_{(j)} - {\lambda_j}^{i+1} {w^*}_{(j)} = (1-{\lambda_j}^{i+1}) {w^*}_{(j)}$.

Now we clearly see that each component of $w$ makes its own progression towards the corresponding component of $w^*$, as long as $|\lambda_j| \lt 1$. Also note that this is a closed form: you can evaluate it at any $i$ without looping, unlike the iteration for $x_{i+1}$.
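Before the full test below, here is a minimal sketch of evaluating $w_i$ directly at any iteration index without looping (it assumes $x_0 = 0$ and a diagonalizable $P$; the function name is just illustrative):

import numpy as np
from numpy import linalg as LA

def w_at(i, P, xstar):
    # closed form in eigen-coordinates: w_i[j] = (1 - lambda_j^i) * wstar[j]
    eigvals, Q = LA.eig(P)
    wstar = LA.inv(Q).dot(xstar)
    return (1 - eigvals.reshape(-1, 1) ** i) * wstar

# the corresponding x_i is then Q.dot(w_i), where Q holds the eigenvectors of P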

Let's test the above theory. First, a couple of utilities:

def analyze(A, b, x, num_iter = 60):
    
    variable_count = len(x)
    x0 = np.zeros((variable_count, 1))
    
    e0 = x0-x
    e = [e0]

    ec = e0
    xl = [x0]
    xc = x0
    
    M = np.diag(np.diag(A))

    N = M - A
    P = LA.inv(M).dot(N)
    z = LA.inv(M).dot(b)

    eigvals, eigvecs = LA.eig(P)
    
    D = np.diag(eigvals)
    Q = np.array(eigvecs)
    Qinv = LA.inv(Q)
    
    wstar = Qinv.dot(x)
    
    d0 = Qinv.dot(e0)
    dc = d0
    dl = [d0]

    print("wstar")
    print(wstar)
    print("-----------")
    wl = []
    Dc = D
    
    for i in range(num_iter):
        en = P.dot(ec)
        e.append(en)
        ec = en
        
        dn = Qinv.dot(en)
        dl.append(dn)
        dc = dn
        
        xn = P.dot(xc) + z
        xl.append(xn)
        wl.append(wstar - Dc.dot(wstar))
        Dc = Dc * D  # advance the power of D; element-wise product equals the matrix product here since D is diagonal
        xc = xn

    return (e, dl, xl, np.array(wl))
import matplotlib.pyplot as plt

def plot_points(points):
    plt.scatter(np.real(points), np.imag(points))
    plt.show()


    x = np.real(points)
    y = np.imag(points)
    n = [i for i in range(len(x))]

    fig, ax = plt.subplots(figsize=(20,10))
    ax.scatter(x, y)

    for i, txt in enumerate(n):
        ax.annotate(txt, (x[i], y[i]))

    plt.xticks(np.arange(min(x)-0.2,max(x)+0.2, 0.1)) 
    plt.yticks(np.arange(min(y)-0.2,max(y)+0.2, 0.1)) 

    plt.show()

def print_error_contributions(dl, eigvals):
    c = len(dl[0])
    n = len(dl)
    x = [i for i in range(n)]
    
    y_prev_min = [0  for k in range(n)]
    y_prev_max = [0  for k in range(n)]
    colors = ['red', 'blue', 'black']
    
    fig, vax = plt.subplots(1, 1, figsize=(24, 12))

    for comp in range(c):
        y_min = y_prev_max
        y_max = [np.absolute(dl[k][comp])[0]+y_min[k] for k in range(n)]
        vax.vlines(x, y_min, y_max, color=colors[comp], label=np.absolute(eigvals[comp]))
        y_prev_min = y_min
        y_prev_max = y_max
    
    plt.legend()
    plt.show()
tol = 1e-13

A = np.array([[4, 2, 3], [3, -5, 2], [-2, 3, 8]])
b = np.array([[8], [-14], [27]])
x_solution = np.array([[-1], [3], [2]])
x_0 = np.zeros((3, 1))


M = np.diag(np.diag(A))
N = M - A
P = LA.inv(M).dot(N)

w, v = LA.eig(P)

D = np.diag(w)
print("Diagonal")
print(D)
print("---")
Q = np.array(v)
print("Eigenbasis")
print(Q)
print("---")

Qinv = LA.inv(Q)


e, dl, xl, wl = analyze(A, b, x_solution)

wl.real[abs(wl.real) < tol] = 0.0
wl.imag[abs(wl.imag) < tol] = 0.0

w_lastiter = wl[-1]
x_lastiter = Q.dot(w_lastiter)

print("value of x after last iteration")
print(x_lastiter)
print(xl[-1])
Diagonal
[[-0.08875095+0.81309913j  0.        +0.j          0.        +0.j        ]
 [ 0.        +0.j         -0.08875095-0.81309913j  0.        +0.j        ]
 [ 0.        +0.j          0.        +0.j          0.1775019 +0.j        ]]
---
Eigenbasis
[[ 0.66666828+0.j          0.66666828-0.j         -0.55176491+0.j        ]
 [-0.22848143-0.58075007j -0.22848143+0.58075007j -0.6277451 +0.j        ]
 [ 0.23121087-0.33558982j  0.23121087+0.33558982j  0.54908249+0.j        ]]
---
wstar
[[-0.47309829+2.75837488e+00j]
 [-0.47309829-2.75837488e+00j]
 [ 0.66912693+2.82446075e-16j]]
-----------
value of x after last iteration
[[-0.99999138+0.j]
 [ 2.99997978+0.j]
 [ 1.99999301+0.j]]
[[-0.99999138]
 [ 2.99997978]
 [ 1.99999301]]
plot_points([item[0][0] for item in wl])
wl_2 = [item[1][0] for item in wl]
plot_points(wl_2)

Notice the spiraling movement of the w's towards convergence, due to the complex (imaginary) components of the eigenvalues.

wl_3 = [item[2][0] for item in wl]
plot_points(wl_3)
print_error_contributions(dl, w)
print([np.absolute(item[0][0]) for item in dl])
[2.798652169004236, 2.2890971880511395, 1.872317680052403, 1.5314218694319996, 1.2525934926325977, 1.0245318348284158, 0.837993720030255, 0.6854188917689926, 0.5606236013044015, 0.45854998470839503, 0.3750610712550078, 0.3067731149536745, 0.2509184537426817, 0.20523333812392872, 0.1678661830137287, 0.13730252432274404, 0.11230363880887148, 0.09185633950958826, 0.07513191199851137, 0.06145252718200072, 0.050263769370455534, 0.0411121662066704, 0.03362680975530614, 0.02750432386936806, 0.022496568571799172, 0.01840058311956207, 0.01505036014978504, 0.012310117519994572, 0.010068795154928024, 0.008235553861060674, 0.006736093679017377, 0.005509642559323132, 0.004506493314673282, 0.003685989023159045, 0.0030148752322811316, 0.002465952179757755, 0.00201697240659974, 0.0016497390834985244, 0.0013493685063400318, 0.0011036868703147007, 0.0009027368743094845, 0.0007383741586104869, 0.0006039372198247381, 0.0004939774249088327, 0.0004040381819659562, 0.000330474317761553, 0.000270304341457427, 0.0002210896069190208, 0.00018083547613054456, 0.0001479104779418167, 0.00012098018570860147, 9.895313393446755e-05, 8.09365819543167e-05, 6.620033179329207e-05, 5.414713376474904e-05, 4.4288480367325094e-05, 3.6224807425058966e-05, 2.962930003691752e-05, 2.4234647002440263e-05, 1.982220688983876e-05, 1.6213146655035137e-05]
print([np.absolute(item[1][0]) for item in dl])
[2.798652169004236, 2.2890971880511395, 1.8723176800524033, 1.5314218694319996, 1.252593492632598, 1.0245318348284158, 0.837993720030255, 0.6854188917689925, 0.5606236013044015, 0.45854998470839503, 0.3750610712550078, 0.3067731149536745, 0.2509184537426817, 0.20523333812392872, 0.16786618301372866, 0.13730252432274404, 0.11230363880887148, 0.09185633950958826, 0.07513191199851135, 0.06145252718200072, 0.05026376937045553, 0.04111216620667041, 0.03362680975530614, 0.02750432386936806, 0.022496568571799175, 0.01840058311956207, 0.015050360149785039, 0.012310117519994572, 0.010068795154928024, 0.008235553861060674, 0.006736093679017377, 0.005509642559323131, 0.004506493314673281, 0.0036859890231590448, 0.003014875232281132, 0.002465952179757755, 0.00201697240659974, 0.0016497390834985242, 0.0013493685063400316, 0.0011036868703147007, 0.0009027368743094846, 0.0007383741586104869, 0.0006039372198247381, 0.0004939774249088327, 0.0004040381819659562, 0.0003304743177615531, 0.00027030434145742697, 0.00022108960691902082, 0.0001808354761305445, 0.00014791047794181672, 0.00012098018570860146, 9.895313393446755e-05, 8.093658195431669e-05, 6.620033179329207e-05, 5.4147133764749035e-05, 4.42884803673251e-05, 3.6224807425058966e-05, 2.9629300036917512e-05, 2.4234647002440263e-05, 1.982220688983876e-05, 1.621314665503514e-05]
print([np.absolute(item[2][0]) for item in dl])
[0.6691269288904338, 0.1187713010249305, 0.021082131562910722, 0.00374211840234584, 0.0006642331253550182, 0.00011790264160016495, 2.0927942864748528e-05, 3.714749615801871e-06, 6.593751137699833e-07, 1.1704033509030154e-07, 2.0774881828722336e-08, 3.687581129341311e-09, 6.545526343160191e-10, 1.1618425665993676e-10, 2.0622961671712406e-11, 3.6606759290804006e-12, 6.497302695921548e-13, 1.1530013092596007e-13, 2.049055399636148e-14, 3.6533318914677856e-15, 6.314481135683715e-16, 1.0283025326360291e-16, 3.134881669300665e-17, 7.969592015545964e-18, 8.935373837186346e-18, 6.9282887019832755e-19, 8.812661463699985e-18, 1.8673160853157835e-19, 4.4683740379945e-18, 2.6029252338254693e-18, 2.281162689856478e-18, 1.312097709798084e-18, 1.790569823012601e-18, 8.884593494774184e-19, 6.516035922184243e-19, 8.849165508992531e-19, 2.4398084866490184e-19, 8.790712237024587e-19, 2.027098600242056e-19, 4.470075600200349e-19, 1.121055986803255e-19, 2.8192285445507893e-19, 2.76201182912248e-20, 1.2060032019151485e-19, 5.452195462514497e-20, 1.3979849631489435e-19, 8.192580043583475e-20, 4.5841264822001056e-20, 5.520586729152878e-20, 4.573452570427438e-20, 4.167859354643055e-20, 1.7790799125307778e-20, 3.466035969235535e-20, 5.966257020288272e-21, 2.764623774768838e-20, 3.569907431211514e-21, 2.06885292569832e-20, 5.082687364439243e-21, 1.213638136781469e-20, 3.4129903461847316e-21, 6.1557787183658335e-21]
# A is strictly diagonally dominant, so convergence is guaranteed.
# Since the eigenvalues are real, there is no spiraling; the w's move along straight lines.
# https://www.maa.org/press/periodicals/loci/joma/iterative-methods-for-solving-iaxi-ibi-jacobis-method

A = np.array([[4, -1, -1], [-2, 6, 1], [-1, 1, 7]])
b = np.array([[3], [9], [-6]])
x_solution = np.array([[1], [2], [-1]])
e, dl, xl, wl = analyze(A, b, x_solution, 50)

M = np.diag(np.diag(A))
N = M - A
P = LA.inv(M).dot(N)
z = LA.inv(M).dot(b)
w, v = LA.eig(P)

D = np.diag(w)
print("Eigenvalues in a diagonal matrix")
print(D)
print("---")
Q = np.array(v)
print("Eigenbasis")
print(Q)
print("---")

Qinv = LA.inv(Q)


print("value of x after last iteration:")
print(xl[-1])


print("-------")

w_lastiter = wl[-1]
x_from_w = Q.dot(w_lastiter)

print("value of x after last iteration via w")
print(x_from_w)
wstar
[[ 0.09126133]
 [-1.62567469]
 [-1.14708505]]
-----------
Eigenvalues in a diagonal matrix
[[-0.42946179  0.          0.        ]
 [ 0.          0.28202928  0.        ]
 [ 0.          0.          0.14743251]]
---
Eigenbasis
[[ 0.62737263 -0.61970267  0.05639495]
 [-0.65211772 -0.78059393 -0.68915668]
 [-0.42561257  0.08149674  0.72241448]]
---
value of x after last iteration:
[[ 1.]
 [ 2.]
 [-1.]]
-------
value of x after last iteration via w
[[ 1.]
 [ 2.]
 [-1.]]
plot_points([item[0][0] for item in wl])
plot_points([item[1][0] for item in wl])
plot_points([item[2][0] for item in wl])
print_error_contributions(dl, w)
print([np.absolute(item[0][0]) for item in dl])
[0.09126132739786263, 0.03919325273556447, 0.01683200435269831, 0.007228702665756757, 0.003104451563520552, 0.0013332433156920064, 0.0005725770566771637, 0.00024589996587601074, 0.00010560463873409491, 4.5353156851522314e-05, 1.9477447781228038e-05, 8.364819527612026e-06, 3.592370341095475e-06, 1.5427857857522183e-06, 6.625675402924254e-07, 2.8454743976987896e-07, 1.2220225193020754e-07, 5.248119747234745e-08, 2.2538668843063196e-08, 9.67949699480334e-09, 4.1569740753009465e-09, 1.78526151431232e-09, 7.667015999501948e-10, 3.2926903910356256e-10, 1.4140846989131924e-10, 6.07295341567624e-11, 2.608101425418065e-11, 1.1200798984739615e-11, 4.810315146253706e-12, 2.0658465380731705e-12, 8.872021456208796e-13, 3.8101941876498335e-13, 1.6363328041143228e-13, 7.027424099537046e-14, 3.018010110815074e-14, 1.2961200149542785e-14, 5.566340176081721e-15, 2.390530398294518e-15, 1.0266414563963668e-15, 4.409032743292475e-16, 1.893511080261688e-16, 8.131906519697046e-17, 3.492343104533209e-17, 1.499827910003451e-17, 6.4411877421361995e-18, 2.766243997243003e-18, 1.1879960899486444e-18, 5.10198923572861e-19, 2.1911094137200286e-19, 9.40997764023488e-20, 4.0412258116944346e-20]
print([np.absolute(item[1][0]) for item in dl])
[1.6256746930324613, 0.45848786611403614, 0.12930700359349495, 0.03646836135499692, 0.010285145761320065, 0.0029007122722585787, 0.0008180857988493498, 0.00023072415029908153, 6.507096640243248e-05, 1.8351917920416114e-05, 5.1757782307210415e-06, 1.459721017158865e-06, 4.116840700955752e-07, 1.1610696261696217e-07, 3.274556327867394e-08, 9.235207693573612e-09, 2.604598993079086e-09, 7.345731833913359e-10, 2.071711473403403e-10, 5.842832989377841e-11, 1.6478499916631844e-11, 4.647419496604932e-12, 1.31070838284425e-12, 3.6965814386087795e-13, 1.0425442082408956e-13, 2.9402799429343783e-14, 8.292450405925919e-15, 2.3387138323321306e-15, 6.595857824655651e-16, 1.860225045131712e-16, 5.2463793346307113e-17, 1.4796325957830212e-17, 4.1729971831730284e-18, 1.1769073985050732e-18, 3.3192234834681225e-19, 9.361182151543574e-20, 2.640127479072266e-20, 7.445932567440602e-21, 2.0999710144070652e-21, 5.922533169006352e-22, 1.670327776360483e-22, 4.7108134307448195e-23, 1.328587328642571e-23, 3.747005298824538e-24, 1.056765214219347e-24, 2.980387342369092e-25, 8.405565030100302e-26, 2.3706154625033054e-26, 6.685829799087391e-27, 1.885599767645552e-27, 5.317943529940371e-28]
print([np.absolute(item[2][0]) for item in dl])
[1.147085054406893, 0.16911762309054326, 0.02493343482239269, 0.003675998755668806, 0.0005419617051535398, 7.990277183853153e-05, 1.178026581356281e-05, 1.736794099191309e-06, 2.5605990482090304e-07, 3.775155321370535e-08, 5.565806060282749e-09, 8.205807301561281e-10, 1.2098027265322058e-10, 1.7836424660081822e-11, 2.629668769100653e-12, 3.876986540575399e-13, 5.715938378917869e-14, 8.42715113756738e-15, 1.24243600527847e-15, 1.8317545246125232e-16, 2.7006015977396637e-17, 3.981564395352544e-18, 5.87011978721918e-19, 8.654462461836722e-20, 1.2759506517874804e-20, 1.881157346961903e-21, 2.7734622768821427e-22, 4.0887279184554735e-23, 6.02896728690383e-24, 8.887748720326843e-25, 1.3111499333497393e-25, 1.931131495981037e-26, 2.8525210332791787e-27, 4.165185579566942e-28, 6.626431603856499e-29, 9.466330862652142e-30, 1.9721522630525295e-30, 1.9721522630525295e-31, 9.860761315262648e-32, 2.465190328815662e-32, 2.465190328815662e-32, 0.0, 3.0814879110195774e-33, 1.5407439555097887e-33, 1.1555579666323415e-33, 3.851859888774472e-34, 0.0, 4.81482486096809e-35, 2.407412430484045e-35, 1.2037062152420224e-35, 0.0]

For reference, here is the eigendecomposition of the iteration matrix $P$ from the $3 \times 3$ example with $A = \begin{bmatrix} 4 & 2 & 3 \\ 3 & -5 & 2 \\ -2 & 3 & 8 \end{bmatrix}$ (the same $D$ and $Q$ printed above), written out as $P = Q D Q^{-1}$:

$$P = \begin{bmatrix} 0 & -0.5 & -0.75 \\ 0.6 & 0 & 0.4 \\ 0.25 & -0.375 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} -0.0888+0.8131 i & 0 & 0 \\ 0 & -0.0888-0.8131 i & 0 \\ 0 & 0 & 0.1775 \end{bmatrix}, \quad Q = \begin{bmatrix} 0.6667 & 0.6667 & -0.5518 \\ -0.2285-0.5808 i & -0.2285+0.5808 i & -0.6277 \\ 0.2312-0.3356 i & 0.2312+0.3356 i & 0.5491 \end{bmatrix}$$