In Section 6.6, we saw that a symmetric matrix is positive definite if and only if its eigenvalues are all positive. These types of matrices occur in a wide variety of applications. They frequently arise in the numerical solution of boundary value problems by finite difference methods or by finite element methods. Because of their importance in applied mathematics, we devote this section to studying their properties.
Recall that a symmetric matrix A is positive definite if x^T A x > 0 for all nonzero vectors x in R^n. In Theorem 6.6.2, symmetric positive definite matrices were characterized by the condition that all their eigenvalues are positive. This characterization can be used to establish the following properties:
Property I If A is a symmetric positive definite matrix, then A is nonsingular.
Property II If A is a symmetric positive definite matrix, then det(A) > 0.
If A were singular, λ = 0 would be an eigenvalue of A. However, since all the eigenvalues of A are positive, A must be nonsingular. The second property also follows from Theorem 6.6.2, since

det(A) = λ1 λ2 · · · λn > 0
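These characterizations are easy to check numerically. The following sketch verifies the eigenvalue condition and Properties I and II for an illustrative 2 × 2 matrix (an assumption for this sketch, not a matrix from the text):

```python
import numpy as np

# Illustrative symmetric positive definite matrix (not from the text)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues of a symmetric matrix, in ascending order
eigenvalues = np.linalg.eigvalsh(A)
print(eigenvalues)            # both positive, so A is positive definite

# Property I: A is nonsingular; Property II: det(A) > 0,
# since det(A) is the product of the eigenvalues
print(np.linalg.det(A))
```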
Given an n × n matrix A, let A_r denote the r × r matrix formed by deleting the last n − r rows and columns of A. The matrix A_r is called the leading principal submatrix of A of order r. We can now state a third property of positive definite matrices:
Property III If A is a symmetric positive definite matrix, then the leading principal submatrices of A are all positive definite.
Proof
To show that A_r is positive definite, 1 ≤ r ≤ n, let x_r = (x1, . . . , xr)^T be any nonzero vector in R^r and set

x = (x1, . . . , xr, 0, . . . , 0)^T

Since

x_r^T A_r x_r = x^T A x > 0

it follows that A_r is positive definite.
An immediate consequence of properties I, II, and III is that if A_r is a leading principal submatrix of a symmetric positive definite matrix A, then A_r is nonsingular and det(A_r) > 0. This has significance in relation to the Gaussian elimination process. In general, if A is an n × n matrix whose leading principal submatrices are all nonsingular, then A can be reduced to upper triangular form using only row operation III; that is, the diagonal elements will never be 0 in the elimination process, so the reduction can be completed without interchanging rows.
Property IV If A is a symmetric positive definite matrix, then A can be reduced to upper triangular form using only row operation III, and the pivot elements will all be positive.
Let us illustrate property IV in the case of a 4 × 4 symmetric positive definite matrix A. Note first that

det(A1) = det(a11) = a11 > 0

so a11 can be used as a pivot element and row 1 is the first pivot row. Let a22^(1) denote the entry in the (2, 2) position after the last three elements of column 1 have been eliminated (see Figure 6.7.1). At this step, the submatrix A2 has been transformed into a matrix

[ a11   a12     ]
[  0    a22^(1) ]

Since the transformation was accomplished using only row operation III, the value of the determinant remains unchanged. Thus,

det(A2) = a11 a22^(1)

and hence

a22^(1) = det(A2) / det(A1) > 0

Since a22^(1) > 0, it can be used as a pivot in the second step of the elimination process. After step 2, the matrix A3 has been transformed into

[ a11   a12      a13     ]
[  0    a22^(1)  a23^(1) ]
[  0     0       a33^(2) ]

Because only row operation III was used,

det(A3) = a11 a22^(1) a33^(2)

and hence

a33^(2) = det(A3) / det(A2) > 0

Thus, a33^(2) can be used as a pivot in the last step. After step 3, the remaining diagonal entry will be

a44^(3) = det(A4) / det(A3) = det(A) / det(A3) > 0
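The relationship between the pivots and the leading principal minors can be checked numerically. In this sketch, the 4 × 4 symmetric positive definite matrix is an illustrative assumption, not the one pictured in Figure 6.7.1:

```python
import numpy as np

# Illustrative 4x4 symmetric positive definite matrix (assumed)
A = np.array([[4.0, 2.0, 2.0, 0.0],
              [2.0, 5.0, 1.0, 1.0],
              [2.0, 1.0, 6.0, 2.0],
              [0.0, 1.0, 2.0, 3.0]])

# Reduce A using only row operation III and record each pivot
U = A.copy()
n = U.shape[0]
pivots = []
for k in range(n):
    pivots.append(U[k, k])
    for j in range(k + 1, n):
        U[j, k:] -= (U[j, k] / U[k, k]) * U[k, k:]

# Each pivot equals a ratio det(A_k) / det(A_{k-1}) of leading principal minors
minors = [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]
ratios = [minors[0]] + [minors[k] / minors[k - 1] for k in range(1, n)]
print(pivots)   # all positive
print(ratios)   # matches the pivots
```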
In general, if an n × n matrix A can be reduced to an upper triangular form U without any interchanges of rows, then A can be factored into a product LU, where L is lower triangular with 1's on the diagonal. The entry ℓji of L below the diagonal will be the multiple of the ith row that was subtracted from the jth row during the elimination process. We illustrate with a 3 × 3 example:
Let
The matrix L is determined as follows: At the first step of the elimination process, a multiple ℓ21 of the first row is subtracted from the second row and a multiple ℓ31 of the first row is subtracted from the third. Corresponding to these operations, we set the (2, 1) entry of L equal to ℓ21 and the (3, 1) entry equal to ℓ31. After step 1, we obtain the matrix
The final elimination is carried out by subtracting a multiple ℓ32 of the second row from the third row. Corresponding to this step, we set the (3, 2) entry of L equal to ℓ32. After step 2, we end up with the upper triangular matrix
The matrix L is given by

L = [ 1     0    0 ]
    [ ℓ21   1    0 ]
    [ ℓ31  ℓ32   1 ]
and we can verify that the product LU = A.
To see why this factorization works, let us view the process in terms of elementary matrices. Row operation III was applied three times during the process. This is equivalent to multiplying A on the left by three elementary matrices E1, E2, E3. Thus, E3E2E1A = U:
Since the elementary matrices are nonsingular, it follows that

A = (E3E2E1)^(−1) U = E1^(−1) E2^(−1) E3^(−1) U
When the inverse elementary matrices are multiplied in this order, the result is a lower triangular matrix L with 1’s on the diagonal. The entries below the diagonal of L will just be the multiples that were subtracted during the elimination process.
∎
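The elimination procedure above can be sketched in code. The 3 × 3 matrix here is an illustrative stand-in (the text's example matrix is not reproduced), and the loop records each multiplier ℓji directly into L:

```python
import numpy as np

# Illustrative 3x3 matrix that reduces without row interchanges (assumed)
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 1.0],
              [2.0, 1.0, 6.0]])

n = A.shape[0]
U = A.copy()
L = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        L[j, i] = U[j, i] / U[i, i]   # multiple of row i subtracted from row j
        U[j, i:] -= L[j, i] * U[i, i:]

print(np.allclose(A, L @ U))   # True: A = LU
```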
Given an LU factorization of a matrix A, it is possible to go one step further and factor U into a product DU1, where D is diagonal and U1 is upper triangular with 1's on the diagonal:

U = DU1, where D = diag(u11, u22, . . . , unn) and U1 = D^(−1)U
It follows, then, that A = LDU1. The matrices L and U1 are referred to as unit triangular matrices, since they are triangular and their diagonal entries are all equal to 1. The representation of a square matrix A as a product of the form LDU, where L is a unit lower triangular matrix, D is diagonal, and U is a unit upper triangular matrix, is referred to as an LDU factorization of A. In general, if A has an LDU factorization, then it is unique (see Exercise 8 at the end of this section).
If A is a symmetric positive definite matrix, then A can be factored into a product LDU. The diagonal elements of D are the entries u11, u22, . . . , unn, which were the pivot elements in the elimination process. By property IV, these elements are all positive. Furthermore, since A is symmetric,

LDU = A = A^T = (LDU)^T = U^T D^T L^T = U^T D L^T
It follows from the uniqueness of the LDU factorization that U = L^T. Thus,

A = LDL^T
This important factorization is often used in numerical computations. There are efficient algorithms that make use of the factorization in solving symmetric positive definite linear systems.
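A short numerical check of this factorization, again with an assumed illustrative matrix: factoring the pivots out of U produces a unit upper triangular U1, which, by the uniqueness of the LDU factorization, must equal L^T.

```python
import numpy as np

# Illustrative symmetric positive definite matrix (assumed for this sketch)
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 1.0],
              [2.0, 1.0, 6.0]])

# LU factorization by elimination without row interchanges
n = A.shape[0]
U = A.copy()
L = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        L[j, i] = U[j, i] / U[i, i]
        U[j, i:] -= L[j, i] * U[i, i:]

D = np.diag(np.diag(U))        # the pivots on the diagonal
U1 = np.linalg.inv(D) @ U      # unit upper triangular factor
print(np.allclose(U1, L.T))    # True: U1 = L^T by uniqueness
print(np.allclose(A, L @ D @ L.T))
```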
Property V If A is a symmetric positive definite matrix, then A can be factored into a product LDL^T, where L is lower triangular with 1's along the diagonal and D is a diagonal matrix whose diagonal entries are all positive.
We saw in Example 1 that A can be factored into a product LU.
Factoring the diagonal entries out of U, we obtain the factorization A = LDL^T.
Since the diagonal elements of D are positive, it is possible to go one step further with the factorization. Let D^(1/2) denote the diagonal matrix whose diagonal entries are the square roots of the corresponding diagonal entries of D,
and set L1 = LD^(1/2). Then

A = LDL^T = LD^(1/2)D^(1/2)L^T = (LD^(1/2))(LD^(1/2))^T = L1L1^T
This factorization is known as the Cholesky decomposition of A.
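The construction of L1 = LD^(1/2) can be sketched as follows. The matrix is again an illustrative assumption, and the result is compared against NumPy's built-in Cholesky routine:

```python
import numpy as np

# Illustrative symmetric positive definite matrix (assumed, not the text's)
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 1.0],
              [2.0, 1.0, 6.0]])

# LDL^T via elimination without row interchanges
n = A.shape[0]
U = A.copy()
L = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        L[j, i] = U[j, i] / U[i, i]
        U[j, i:] -= L[j, i] * U[i, i:]

# L1 = L D^{1/2}: scale the columns of L by the square roots of the pivots
D_half = np.diag(np.sqrt(np.diag(U)))
L1 = L @ D_half
print(np.allclose(A, L1 @ L1.T))               # True: A = L1 L1^T
print(np.allclose(L1, np.linalg.cholesky(A)))  # matches NumPy's Cholesky factor
```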
Property VI (Cholesky Decomposition) If A is a symmetric positive definite matrix, then A can be factored into a product LL^T, where L is lower triangular with positive diagonal elements.
The Cholesky decomposition of a symmetric positive definite matrix A can also be represented in terms of an upper triangular matrix. Indeed, if A has Cholesky decomposition A = LL^T, where L is lower triangular with positive diagonal entries, then the matrix R = L^T is upper triangular with positive diagonal entries and

A = LL^T = R^T R
Let A be the matrix from Examples 1 and 2. If we set R = L1^T, then R is upper triangular with positive diagonal entries and

A = L1L1^T = R^T R
The Cholesky factorization of the symmetric positive definite matrix A in Example 3 could also have been written in terms of the upper triangular matrix R.
More generally, it is not difficult to show that any product of the form B^T B will be positive definite, provided that B is nonsingular. Putting all these results together, we have the following theorem.
Theorem 6.7.1 Let A be a symmetric n × n matrix. The following are equivalent:
(a) A is positive definite.
(b) The leading principal submatrices A1, A2, . . . , An all have positive determinants.
(c) A can be reduced to upper triangular form using only row operation III, and the pivot elements will all be positive.
(d) A has a Cholesky factorization A = LL^T (where L is lower triangular with positive diagonal entries).
(e) A can be factored into a product A = B^T B for some nonsingular matrix B.
Proof
We have already shown that (a) implies (b), (b) implies (c), and (c) implies (d). To see that (d) implies (e), assume that A = LL^T. If we set B = L^T, then B is nonsingular and

A = LL^T = B^T B
Finally, to show that (e) implies (a), assume that A = B^T B, where B is nonsingular. Let x be any nonzero vector in R^n and set y = Bx. Since B is nonsingular, y ≠ 0, and it follows that

x^T A x = x^T B^T B x = y^T y = ||y||^2 > 0

Thus, A is positive definite. ∎
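Condition (e) of the theorem is easy to illustrate numerically; here B is an arbitrary nonsingular matrix, chosen purely for illustration:

```python
import numpy as np

# An arbitrary illustrative nonsingular matrix B (an assumption, not from the text)
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
print(np.linalg.det(B))          # nonzero, so B is nonsingular

A = B.T @ B                      # symmetric by construction
eigs = np.linalg.eigvalsh(A)
print(eigs)                      # all positive: B^T B is positive definite
```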
Analogous results to Theorem 6.7.1 are not valid for positive semidefiniteness. For example, consider the matrix
The leading principal submatrices all have nonnegative determinants:
However, A is not positive semidefinite, since it has a negative eigenvalue λ. Indeed, if x is an eigenvector belonging to λ, then

x^T A x = x^T (λx) = λ ||x||^2 < 0
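The same phenomenon can be reproduced with a small illustrative matrix (assumed here; it need not be the text's example): every leading principal minor is nonnegative, yet the matrix has a negative eigenvalue.

```python
import numpy as np

# Illustrative symmetric matrix (assumed) whose leading principal minors are
# all nonnegative but which is not positive semidefinite
A = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

minors = [np.linalg.det(A[:k, :k]) for k in (1, 2, 3)]
print(minors)                     # all zero, hence nonnegative

eigs = np.linalg.eigvalsh(A)
print(eigs)                       # contains -1: not positive semidefinite

x = np.array([0.0, 1.0, -1.0])    # eigenvector belonging to the eigenvalue -1
print(x @ A @ x)                  # -2.0 < 0
```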