20.1 INTRODUCTION

The need to solve systems of linear equations arises in almost all areas of engineering and scientific applications. A system of linear equations is generally expressed in matrix form as

Ax = b    (20.1)

where A is the n × n system matrix, x is the unknown vector of n components, and b is a vector of constants. Techniques for solving linear systems can be direct or iterative (indirect). Direct techniques are appropriate for small systems (small values of n), where computational errors remain small. Iterative techniques are more appropriate for large systems, where an assumed solution is refined at each iteration while computational noise is suppressed. Table 20.1 summarizes the different direct and indirect techniques used to solve linear systems. Reference [129] explains in detail how such techniques are used.
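As a minimal illustration of the distinction between the two families, the following sketch contrasts a direct solve (back substitution on an upper triangular system) with an iterative solve (a Jacobi sweep) of Equation 20.1. It is written in NumPy; the function names and the code itself are ours, not the text's.

import numpy as np

def back_substitution(U, b):
    # Direct technique: solve U x = b exactly when U is upper triangular.
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

def jacobi(A, b, sweeps=50):
    # Iterative (indirect) technique: refine an assumed solution each sweep.
    # Requires nonzero diagonal elements (see Table 20.1).
    D = np.diag(A)            # diagonal of A
    R = A - np.diag(D)        # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(sweeps):
        x = (b - R @ x) / D   # x_new = D^{-1} (b - R x_old)
    return x

For a diagonally dominant A, the Jacobi sweeps converge toward the exact solution; the direct routine produces it in a single pass but only for a triangular system matrix.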

Table 20.1 Direct and Indirect Techniques Used to Solve Linear Systems

Direct techniques                 Comment
Forward substitution              System matrix is lower triangular
Back substitution                 System matrix is upper triangular
LU factorization                  Convert system matrix to an equivalent triangular system; L is a lower triangular matrix and U is an upper triangular matrix
Gaussian elimination              Convert system matrix to an equivalent triangular system
LDM^T factorization               Convert system matrix to three special matrices; L is a lower triangular matrix, D is a diagonal matrix, and M is a Gaussian transformation matrix such that the product MA produces an upper triangular matrix
LDL^T factorization               Convert system matrix to three special matrices when the system matrix is symmetric
Positive definite systems         System matrix is positive definite
Banded systems                    System matrix is banded
Symmetric indefinite systems      System matrix is symmetric
Block tridiagonal systems         System matrix has a special block structure
Vandermonde systems               System matrix has 1s in its first row
Toeplitz systems                  System matrix is Toeplitz

Indirect techniques               Comment
Jacobi                            Used when system matrix has nonzero diagonal elements
Gauss–Seidel                      Like Jacobi but uses the most recently available estimates (see the sketch following the table)
Successive overrelaxation (SOR)   Like Gauss–Seidel but can accelerate convergence
Chebyshev semi-iterative          Like Gauss–Seidel but can accelerate convergence
Conjugate gradient                Used when SOR or Chebyshev methods prove difficult
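
To make the Gauss–Seidel and SOR rows of Table 20.1 concrete, here is a minimal NumPy sketch with names of our choosing: setting omega = 1 gives a plain Gauss–Seidel sweep, which updates each unknown using the most recently available estimates of the others, while 1 < omega < 2 applies the over-relaxation that can accelerate convergence.

import numpy as np

def sor(A, b, omega=1.0, sweeps=50):
    # omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 is over-relaxation.
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            # Use the already-updated components x[:i] of the current sweep.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x_gs = (b[i] - sigma) / A[i, i]
            x[i] = (1.0 - omega) * x[i] + omega * x_gs   # relaxation step
    return x

Choosing a good omega is problem dependent, which is one reason the table lists the conjugate gradient method for cases where the SOR or Chebyshev methods prove difficult.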

A comprehensive discussion of matrix computations can be found in the standard textbook by Golub and Van Loan [129]. We provide here a brief introduction and tie the algorithms to the techniques we discussed in Chapters 7, 8, 10, and 11.

Typically, the system matrix will have some structure due to the nature of the application. Before we start, we define some of these structures in the following section.
