WS04: LU Factorisation and iterative methods#

These exercises are intended to give you practice with the material on numerical approximation and to reinforce what was covered in lectures.

Please attempt the worksheet before your tutorial. Support is available in your tutorial or in the Class Team.

Partial solutions are available to check your understanding. Please create Issues and Pull requests with your solutions, or share your answers for peer feedback in the class team's Worksheet solutions channel.

Part a (pen-and-paper warm-up)#

1. LU factorisation#

Use LU factorisation to solve the linear system of equations given by

\[\begin{split} \begin{pmatrix} 4 & -1 & -1 \\ 2 & 4 & 2 \\ 1 & 2 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 9 \\ -6 \\ 3 \end{pmatrix}. \end{split}\]
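
Recall the two-stage structure of the solve: once the factorisation \(A = LU\) is known, with \(L\) lower triangular and \(U\) upper triangular,

\[ A \vec{x} = L U \vec{x} = \vec{b} \quad \Longrightarrow \quad \text{solve } L \vec{y} = \vec{b} \text{ by forward substitution, then } U \vec{x} = \vec{y} \text{ by backward substitution.} \]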

2. Jacobi iteration#

Writing \(A = D + L + U\), where \(D\) is the diagonal part of \(A\) and \(L\) and \(U\) are its strictly lower and strictly upper triangular parts (not the factors from question 1), the Jacobi iteration can be rewritten as:

\[ \vec{x}^{(k+1)} = D^{-1} \vec{b} - D^{-1}(L+U) \vec{x}^{(k)}, \]

or

\[ \vec{x}^{(k+1)} = \vec{p} - P \vec{x}^{(k)}, \]

with \(P=D^{-1}(L+U)\) and \(\vec{p}=D^{-1} \vec{b}\); this form makes both hand computation and coding easier. Apply the above Jacobi iteration to the following linear system of equations, taking three iterations from the initial guess \((0, 0)^T\).

\[\begin{split} \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \end{pmatrix}. \end{split}\]

Note that the exact solution is \((1,1)^T\). If we want to reduce the absolute error \(\|\vec{x}^* - \vec{x}\|\) between the exact solution \(\vec{x}^*\) and our computed solution \(\vec{x}\) to less than 0.1, how many iterations do we need? What is the corresponding residual?
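
If you want to check your hand computation, the iteration \(\vec{x}^{(k+1)} = \vec{p} - P \vec{x}^{(k)}\) is only a few lines of NumPy. A minimal sketch for this \(2 \times 2\) system (the names `P`, `p` and `x` mirror the notation above; the three-iteration count is just for this exercise):

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 3.0])
x_exact = np.array([1.0, 1.0])

D_inv = np.diag(1.0 / np.diag(A))      # D^{-1}
P = D_inv @ (A - np.diag(np.diag(A)))  # P = D^{-1}(L+U)
p = D_inv @ b                          # p = D^{-1} b

x = np.zeros(2)  # initial guess (0, 0)^T
for k in range(1, 4):  # three Jacobi iterations
    x = p - P @ x
    print(k, x, np.linalg.norm(x_exact - x))  # iterate and absolute error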

3. Gauss-Seidel iteration#

Apply Gauss-Seidel iteration to the above system and take three iterations. Does it give a more accurate approximation than Jacobi iteration? All you need to do is update \((x_1, x_2)\) component by component: update \(x_1\) first, then update \(x_2\) using your new \(x_1\), as written out below.
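
Written out for this system, one Gauss-Seidel sweep reads

\[\begin{split} \begin{aligned} x_1^{(k+1)} & = \tfrac{1}{2} \left( 3 - x_2^{(k)} \right), \\ x_2^{(k+1)} & = \tfrac{1}{2} \left( 3 - x_1^{(k+1)} \right), \end{aligned} \end{split}\]

so the freshly updated \(x_1^{(k+1)}\) is used immediately in the second equation.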

Part b (code implementations and testing)#

4. Implement Gauss-Seidel iteration and Jacobi iteration#

Notice that the only difference between Gauss-Seidel iteration and Jacobi iteration is that the former updates the vector \(\vec{x}\) component by component, while the latter updates \(\vec{x}\) as a whole; the component formulas below make this explicit.
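
In component form, with \(a_{ij}\) the entries of \(A\), the two updates are

\[\begin{split} \begin{aligned} \text{Jacobi:} \quad x_i^{(k+1)} & = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \neq i} a_{ij} x_j^{(k)} \Big), \\ \text{Gauss-Seidel:} \quad x_i^{(k+1)} & = \frac{1}{a_{ii}} \Big( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \Big). \end{aligned} \end{split}\]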

import numpy as np


def Gauss_Seidel_iteration(A, b, max_iteration, x0=None):
    # Ensure that the arrays are stored in double precision.
    A = A.astype(np.float64)
    b = b.astype(np.float64)

    n = len(b)  # dimension of the linear system of equations

    # The iteration divides by the diagonal entries, so none may be zero.
    for i in range(n):
        if np.abs(A[i, i]) < 1.0e-15:
            print("Diagonal element (%d, %d) is zero!" % (i, i))
            return

    P = np.zeros([n, n])  # matrix P = D^{-1}(L+U)
    p = np.zeros(n)  # vector p = D^{-1} b
    for i in range(n):
        """
        Compute the matrix P and vector p...
        """

    # Start from the initial guess if one is given, otherwise from zero.
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=np.float64)
    for it in range(max_iteration):
        """
        Implement Gauss-Seidel iteration -- update x component by component.
        """

    return x


def Jacobi_iteration(A, b, max_iteration, x0=None):
    # Ensure that the arrays are stored in double precision.
    A = A.astype(np.float64)
    b = b.astype(np.float64)

    n = len(b)  # dimension of the linear system of equations

    # The iteration divides by the diagonal entries, so none may be zero.
    for i in range(n):
        if np.abs(A[i, i]) < 1.0e-15:
            print("Diagonal element (%d, %d) is zero!" % (i, i))
            return

    P = np.zeros([n, n])  # matrix P = D^{-1}(L+U)
    p = np.zeros(n)  # vector p = D^{-1} b
    for i in range(n):
        """
        Compute the matrix P and vector p...
        """

    # Start from the initial guess if one is given, otherwise from zero.
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=np.float64)
    for it in range(max_iteration):
        """
        Implement Jacobi iteration -- update x as a whole vector.
        """

    return x

5. Test your iterative methods#

# Test different linear solvers, starting from the above two-dimensional linear system
A = np.array([[2, 1], [1, 2]])
b = np.array([3, 3])
x_exact = np.array([1, 1])

# numpy linear solver (reference solution)
x_np = np.linalg.solve(A, b)
print("Solution by numpy solver:", x_np)

x = Jacobi_iteration(A, b, 4)
print("Solution by Jacobi iteration: ", x)
print("Error: ", x - x_exact)
print("Residual: ", np.matmul(A, x) - b)

x = Gauss_Seidel_iteration(A, b, 4)
print("Solution by Gauss-Seidel iteration: ", x)
print("Error: ", x - x_exact)
print("Residual: ", np.matmul(A, x) - b)

# A larger, non-symmetric test system
A = np.array(
    [[-10, 2, 0, 67], [-2, 50, -77, 1.0e-5], [1, 7, 30, 8], [-10, -7, 0.001, 80]]
)
b = np.array([1, 2, 9, 0])

# numpy linear solver (reference solution)
x_np = np.linalg.solve(A, b)
# x_np = np.linalg.inv(A).dot(b)
print("Solution by numpy solver:", x_np)

x = Jacobi_iteration(A, b, 100)
print("Solution by Jacobi iteration: ", x)
print("Residual: ", np.matmul(A, x) - b)

x = Gauss_Seidel_iteration(A, b, 100)
print("Solution by Gauss-Seidel iteration: ", x)
print("Residual: ", np.matmul(A, x) - b)

# A random symmetric test matrix with a strongly weighted diagonal
n = 20
B = np.random.rand(n, n)
eps = 10
A = eps * np.eye(n) + B * B.T  # the elementwise product keeps A symmetric
b = np.random.rand(n)

# numpy linear solver (reference solution)
x_np = np.linalg.solve(A, b)
# x_np = np.linalg.inv(A).dot(b)
print("Solution by numpy solver:", x_np)

x = Jacobi_iteration(A, b, 100)
print("Solution by Jacobi iteration: ", x)
print("Residual: ", np.matmul(A, x) - b)

x = Gauss_Seidel_iteration(A, b, 100)
print("Solution by Gauss-Seidel iteration: ", x)
print("Residual: ", np.matmul(A, x) - b)

Part c: Extension#

6. Convergence#

For the following iteration formulas:

\[\begin{split} \begin{aligned} \vec{x}^{(k+1)} & = \begin{pmatrix} 3 \\ 1 \end{pmatrix} - \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \vec{x}^{(k)} \\ \vec{x}^{(k+1)} & = \begin{pmatrix} 3 \\ 1 \end{pmatrix} - \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \vec{x}^{(k)} \\ \vec{x}^{(k+1)} & = \begin{pmatrix} 3 \\ 1 \end{pmatrix} - \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix} \vec{x}^{(k)}. \end{aligned} \end{split}\]
  • Take two or three iterations from an arbitrary initial guess. Which of these iterations do you think can converge?

  • Do you think the convergence of the Jacobi iteration is completely determined by the properties of the matrix \(P\)?

  • See Wikipedia: Eigenvalue decomposition. Eigenvalue decomposition reduces a matrix to diagonal form, which is the right tool for analysing the convergence of an iterative method; it is beyond the scope of this module, but a small numerical check is sketched below.
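
A standard fact you can take on trust here: the iteration \(\vec{x}^{(k+1)} = \vec{p} - P \vec{x}^{(k)}\) converges for every initial guess precisely when the spectral radius \(\rho(P)\), the largest eigenvalue magnitude of \(P\), is less than 1. A minimal sketch to check the three iteration matrices above numerically:

import numpy as np

# Spectral radius rho(P) = max |eigenvalue of P|;
# the iteration converges for every initial guess iff rho(P) < 1.
for P in (2 * np.eye(2), np.eye(2), 0.5 * np.eye(2)):
    rho = np.max(np.abs(np.linalg.eigvals(P)))
    print("rho(P) =", rho, "->", "converges" if rho < 1 else "does not converge")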