6.7 Positive Definite Matrices

In Section 6.6, we saw that a symmetric matrix is positive definite if and only if its eigenvalues are all positive. These types of matrices occur in a wide variety of applications. They frequently arise in the numerical solution of boundary value problems by finite difference methods or by finite element methods. Because of their importance in applied mathematics, we devote this section to studying their properties.

Recall that a symmetric n×n matrix A is positive definite if x^TAx > 0 for all nonzero vectors x in R^n. In Theorem 6.6.2, symmetric positive definite matrices were characterized by the condition that all their eigenvalues are positive. This characterization can be used to establish the following properties:

  • Property I If A is a symmetric positive definite matrix, then A is nonsingular.

  • Property II If A is a symmetric positive definite matrix, then det(A)>0.

If A were singular, λ=0 would be an eigenvalue of A. However, since all the eigenvalues of A are positive, A must be nonsingular. The second property also follows from Theorem 6.6.2, since

\[
\det(A) = \lambda_1\lambda_2\cdots\lambda_n > 0
\]

Given an n×n matrix A, let A_r denote the matrix formed by deleting the last n − r rows and columns of A. A_r is called the leading principal submatrix of A of order r. We can now state a third property of positive definite matrices:

  • Property III If A is a symmetric positive definite matrix, then the leading principal submatrices A_1, A_2, . . . , A_n of A are all positive definite.

Proof

To show that A_r is positive definite, 1 ≤ r ≤ n, let x_r = (x_1, . . . , x_r)^T be any nonzero vector in R^r and set

x = (x_1, . . . , x_r, 0, . . . , 0)^T

Since

\[
x_r^TA_rx_r = x^TAx > 0
\]

it follows that Ar is positive definite.

An immediate consequence of properties I, II, and III is that if A_r is a leading principal submatrix of a symmetric positive definite matrix A, then A_r is nonsingular and det(A_r) > 0. This has significance in relation to the Gaussian elimination process. In general, if A is an n×n matrix whose leading principal submatrices are all nonsingular, then A can be reduced to upper triangular form using only row operation III; that is, the diagonal elements will never be 0 in the elimination process, so the reduction can be completed without interchanging rows.
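These determinant conditions are easy to check numerically. The following is a minimal sketch assuming NumPy is available; the helper name leading_minors_positive and the two test matrices are ours, chosen only for illustration:

```python
import numpy as np

def leading_minors_positive(A):
    """Check that every leading principal submatrix has positive determinant."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return all(np.linalg.det(A[:r, :r]) > 0 for r in range(1, n + 1))

# A symmetric positive definite matrix passes the test ...
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 10.0, 4.0],
              [2.0, 4.0, 5.0]])
print(leading_minors_positive(A))  # True

# ... while a symmetric indefinite matrix fails it.
B = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(leading_minors_positive(B))  # False
```

For symmetric matrices, Theorem 6.7.1 later in this section shows that this necessary condition is also sufficient.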

  • Property IV If A is a symmetric positive definite matrix, then A can be reduced to upper triangular form using only row operation III, and the pivot elements will all be positive.

Let us illustrate property IV in the case of a 4×4 symmetric positive definite matrix A. Note first that

a_11 = det(A_1) > 0

so a_11 can be used as a pivot element and row 1 is the first pivot row. Let a_22^(1) denote the entry in the (2, 2) position after the last three elements of column 1 have been eliminated (see Figure 6.7.1). At this step, the submatrix A_2 has been transformed into the matrix

\[
\begin{bmatrix} a_{11} & a_{12} \\ 0 & a_{22}^{(1)} \end{bmatrix}
\]

Figure 6.7.1. A 4×4 matrix with diagonal entries a_11, a_22, a_33, a_44; all off-diagonal entries are marked x.

Since the transformation was accomplished using only row operation III, the value of the determinant remains unchanged. Thus,

\[
\det(A_2) = a_{11}a_{22}^{(1)}
\]

and hence

\[
a_{22}^{(1)} = \frac{\det(A_2)}{a_{11}} = \frac{\det(A_2)}{\det(A_1)} > 0
\]

Since a_22^(1) > 0, it can be used as a pivot in the second step of the elimination process. After step 2, the matrix A_3 has been transformed into

\[
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22}^{(1)} & a_{23}^{(1)} \\ 0 & 0 & a_{33}^{(2)} \end{bmatrix}
\]

Because only row operation III was used,

\[
\det(A_3) = a_{11}a_{22}^{(1)}a_{33}^{(2)}
\]

and hence

\[
a_{33}^{(2)} = \frac{\det(A_3)}{a_{11}a_{22}^{(1)}} = \frac{\det(A_3)}{\det(A_2)} > 0
\]

Thus, a_33^(2) can be used as a pivot in the last step. After step 3, the remaining diagonal entry will be

\[
a_{44}^{(3)} = \frac{\det(A_4)}{\det(A_3)} > 0
\]
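The pattern derived above, each pivot equal to a ratio of successive leading principal minors, can be verified numerically. A minimal sketch assuming NumPy, with a sample positive definite matrix of our choosing:

```python
import numpy as np

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 10.0, 4.0],
              [2.0, 4.0, 5.0]])
n = A.shape[0]

# Eliminate below each pivot using only row operation III.
U = A.copy()
pivots = []
for k in range(n):
    pivots.append(float(U[k, k]))
    for i in range(k + 1, n):
        U[i] -= (U[i, k] / U[k, k]) * U[k]

# Ratios det(A_k) / det(A_{k-1}) of leading principal minors.
dets = [float(np.linalg.det(A[:r, :r])) for r in range(1, n + 1)]
ratios = [dets[0]] + [dets[r] / dets[r - 1] for r in range(1, n)]

print(np.allclose(pivots, ratios))  # True
```

Here the pivots are 4, 9, 3 and the minors are 4, 36, 108, so the ratios reproduce the pivots exactly (up to rounding).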

In general, if an n×n matrix A can be reduced to an upper triangular form U without any interchanges of rows, then A can be factored into a product LU, where L is lower triangular with 1's on the diagonal. For i > j, the (i, j) entry of L is the multiple of the jth row that was subtracted from the ith row during the elimination process. We illustrate with a 3 × 3 example:

Example 1

Let

\[
A = \begin{bmatrix} 4 & 2 & 2 \\ 2 & 10 & 4 \\ 2 & 4 & 5 \end{bmatrix}
\]

The matrix L is determined as follows: At the first step of the elimination process, 1/2 times the first row is subtracted from the second row and 1/2 times the first row is subtracted from the third. Corresponding to these operations, we set l_21 = 1/2 and l_31 = 1/2. After step 1, we obtain the matrix

\[
A^{(1)} = \begin{bmatrix} 4 & 2 & 2 \\ 0 & 9 & 3 \\ 0 & 3 & 4 \end{bmatrix}
\]

The final elimination is carried out by subtracting 1/3 times the second row from the third row. Corresponding to this step, we set l_32 = 1/3. After step 2, we end up with the upper triangular matrix

\[
U = A^{(2)} = \begin{bmatrix} 4 & 2 & 2 \\ 0 & 9 & 3 \\ 0 & 0 & 3 \end{bmatrix}
\]

The matrix L is given by

\[
L = \begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{2} & 1 & 0 \\ \frac{1}{2} & \frac{1}{3} & 1 \end{bmatrix}
\]

and we can verify that the product LU=A.

\[
\begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{2} & 1 & 0 \\ \frac{1}{2} & \frac{1}{3} & 1 \end{bmatrix}
\begin{bmatrix} 4 & 2 & 2 \\ 0 & 9 & 3 \\ 0 & 0 & 3 \end{bmatrix}
=
\begin{bmatrix} 4 & 2 & 2 \\ 2 & 10 & 4 \\ 2 & 4 & 5 \end{bmatrix}
\]
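The multiplier bookkeeping described in Example 1 can be automated. Below is a minimal sketch assuming NumPy; the function name lu_no_pivot is ours. It performs no row interchanges, so it is appropriate only when the leading principal submatrices are all nonsingular:

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization using only row operation III (no row interchanges)."""
    U = np.asarray(A, dtype=float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]  # multiplier stored below the diagonal
            U[i] -= L[i, k] * U[k]       # row operation III
    return L, U

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 10.0, 4.0],
              [2.0, 4.0, 5.0]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))  # True
```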

To see why this factorization works, let us view the process in terms of elementary matrices. Row operation III was applied three times, which is equivalent to multiplying A on the left by three elementary matrices E_1, E_2, E_3. Thus, E_3E_2E_1A = U:

\[
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -\frac{1}{3} & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -\frac{1}{2} & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ -\frac{1}{2} & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 4 & 2 & 2 \\ 2 & 10 & 4 \\ 2 & 4 & 5 \end{bmatrix}
=
\begin{bmatrix} 4 & 2 & 2 \\ 0 & 9 & 3 \\ 0 & 0 & 3 \end{bmatrix}
\]

Since the elementary matrices are nonsingular, it follows that

\[
A = \left(E_1^{-1}E_2^{-1}E_3^{-1}\right)U
\]

When the inverse elementary matrices are multiplied in this order, the result is a lower triangular matrix L with 1’s on the diagonal. The entries below the diagonal of L will just be the multiples that were subtracted during the elimination process.

\[
E_1^{-1}E_2^{-1}E_3^{-1} =
\begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{2} & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \frac{1}{2} & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & \frac{1}{3} & 1 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{2} & 1 & 0 \\ \frac{1}{2} & \frac{1}{3} & 1 \end{bmatrix}
\]

Given an LU factorization of a matrix A, it is possible to go one step further and factor U into a product DU_1, where D is diagonal and U_1 is upper triangular with 1's on the diagonal:

\[
DU_1 =
\begin{bmatrix}
u_{11} & & & \\
& u_{22} & & \\
& & \ddots & \\
& & & u_{nn}
\end{bmatrix}
\begin{bmatrix}
1 & \frac{u_{12}}{u_{11}} & \frac{u_{13}}{u_{11}} & \cdots & \frac{u_{1n}}{u_{11}} \\
 & 1 & \frac{u_{23}}{u_{22}} & \cdots & \frac{u_{2n}}{u_{22}} \\
 & & \ddots & & \vdots \\
 & & & & 1
\end{bmatrix}
\]

It follows, then, that A = LDU_1. The matrices L and U_1 are referred to as unit triangular matrices, since they are triangular and their diagonal entries are all equal to 1. The representation of a square matrix A as a product LDU, where L is a unit lower triangular matrix, D is diagonal, and U is a unit upper triangular matrix, is referred to as an LDU factorization of A. In general, if A has an LDU factorization, then it is unique (see Exercise 8 at the end of this section).
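Extracting D and U_1 from U amounts to dividing each row of U by its diagonal entry. A short sketch assuming NumPy; the sample U is ours:

```python
import numpy as np

# Start from an upper triangular U with nonzero diagonal entries
# (the U of an LU factorization); the example values are ours.
U = np.array([[4.0, 2.0, 2.0],
              [0.0, 9.0, 3.0],
              [0.0, 0.0, 3.0]])

d = np.diag(U)           # the pivots u_11, ..., u_nn
D = np.diag(d)
U1 = U / d[:, None]      # divide row i of U by u_ii -> unit upper triangular

print(np.allclose(D @ U1, U))  # True
```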

If A is a symmetric positive definite matrix, then A can be factored into a product LU = LDU_1. The diagonal elements of D are the entries u_11, . . . , u_nn, which were the pivot elements in the elimination process. By property IV, these elements are all positive. Furthermore, since A is symmetric,

\[
LDU_1 = A = A^T = (LDU_1)^T = U_1^TD^TL^T
\]

It follows from the uniqueness of the LDU factorization that L^T = U_1. Thus,

\[
A = LDL^T
\]

This important factorization is often used in numerical computations. There are efficient algorithms that make use of the LDL^T factorization in solving symmetric positive definite linear systems.

  • Property V If A is a symmetric positive definite matrix, then A can be factored into a product LDL^T, where L is lower triangular with 1's along the diagonal and D is a diagonal matrix whose diagonal entries are all positive.
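As a sketch of how such an algorithm might look (assuming NumPy; not a production routine), the code below builds L and the diagonal of D by elimination and then solves Ax = b with a forward solve, a diagonal scaling, and a back solve; the matrix and right-hand side are ours:

```python
import numpy as np

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 10.0, 4.0],
              [2.0, 4.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
n = A.shape[0]

# Elimination: record multipliers in L; the pivots end up on the diagonal of U.
U = A.copy()
L = np.eye(n)
for k in range(n):
    for i in range(k + 1, n):
        L[i, k] = U[i, k] / U[k, k]
        U[i] -= L[i, k] * U[k]
d = np.diag(U)                       # diagonal of D (all positive for SPD A)

# Solve A x = b as L (D (L^T x)) = b.
y = np.linalg.solve(L, b)            # forward substitution: L y = b
x = np.linalg.solve(L.T, y / d)      # scale by D^{-1}, then back solve L^T x = D^{-1} y
print(np.allclose(A @ x, b))  # True
```

Because the two triangular factors are transposes of each other, only one of them (plus the diagonal) needs to be stored, which is roughly half the work and storage of a general LU solve.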

Example 2

We saw in Example 1 that

\[
A = \begin{bmatrix} 4 & 2 & 2 \\ 2 & 10 & 4 \\ 2 & 4 & 5 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{2} & 1 & 0 \\ \frac{1}{2} & \frac{1}{3} & 1 \end{bmatrix}
\begin{bmatrix} 4 & 2 & 2 \\ 0 & 9 & 3 \\ 0 & 0 & 3 \end{bmatrix}
= LU
\]

Factoring out the diagonal entries of U, we get

\[
A = \begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{2} & 1 & 0 \\ \frac{1}{2} & \frac{1}{3} & 1 \end{bmatrix}
\begin{bmatrix} 4 & 0 & 0 \\ 0 & 9 & 0 \\ 0 & 0 & 3 \end{bmatrix}
\begin{bmatrix} 1 & \frac{1}{2} & \frac{1}{2} \\ 0 & 1 & \frac{1}{3} \\ 0 & 0 & 1 \end{bmatrix}
= LDL^T
\]

Since the diagonal elements u_11, . . . , u_nn are positive, it is possible to go one step further with the factorization. Let

\[
D^{1/2} =
\begin{bmatrix}
\sqrt{u_{11}} & & & \\
& \sqrt{u_{22}} & & \\
& & \ddots & \\
& & & \sqrt{u_{nn}}
\end{bmatrix}
\]

and set L_1 = LD^{1/2}. Then

\[
A = LDL^T = LD^{1/2}\left(D^{1/2}\right)^TL^T = L_1L_1^T
\]

This factorization is known as the Cholesky decomposition of A.

  • Property VI (Cholesky Decomposition) If A is a symmetric positive definite matrix, then A can be factored into a product LL^T, where L is lower triangular with positive diagonal elements.

The Cholesky decomposition of a symmetric positive definite matrix A can also be represented in terms of an upper triangular matrix. Indeed, if A has Cholesky decomposition LL^T, where L is lower triangular with positive diagonal entries, then the matrix R = L^T is upper triangular with positive diagonal entries and

\[
A = LL^T = R^TR
\]
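Property VI translates directly into an algorithm. The following is a minimal pure Python sketch with no pivoting or error handling; it assumes its input is symmetric positive definite, and the test matrix is ours:

```python
import math

def cholesky(A):
    """Return lower triangular L with positive diagonal such that A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)  # positive definiteness guarantees s > 0
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

A = [[4.0, 2.0, 2.0],
     [2.0, 10.0, 4.0],
     [2.0, 4.0, 5.0]]
for row in cholesky(A):
    print(row)
```

Each diagonal step takes the square root of a quantity that Property IV guarantees is positive; in practice, a failed square root here is a common numerical test for the lack of positive definiteness.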

Example 3

Let A be the matrix from Examples 1 and 2. If we set

\[
L_1 = LD^{1/2} =
\begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{2} & 1 & 0 \\ \frac{1}{2} & \frac{1}{3} & 1 \end{bmatrix}
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & \sqrt{3} \end{bmatrix}
=
\begin{bmatrix} 2 & 0 & 0 \\ 1 & 3 & 0 \\ 1 & 1 & \sqrt{3} \end{bmatrix}
\]

then

\[
L_1L_1^T =
\begin{bmatrix} 2 & 0 & 0 \\ 1 & 3 & 0 \\ 1 & 1 & \sqrt{3} \end{bmatrix}
\begin{bmatrix} 2 & 1 & 1 \\ 0 & 3 & 1 \\ 0 & 0 & \sqrt{3} \end{bmatrix}
=
\begin{bmatrix} 4 & 2 & 2 \\ 2 & 10 & 4 \\ 2 & 4 & 5 \end{bmatrix}
= A
\]

The Cholesky factorization of the symmetric positive definite matrix A in Example 3 could also have been written in terms of the upper triangular matrix R = L_1^T:

\[
A = L_1L_1^T = R^TR
\]

More generally, it is not difficult to show that any product of the form B^TB will be positive definite, provided that B is nonsingular. Putting all these results together, we have the following theorem.

Theorem 6.7.1

Let A be a symmetric n×n matrix. The following are equivalent:

  1. A is positive definite.

  2. The leading principal submatrices A_1, . . . , A_n all have positive determinants.

  3. A can be reduced to upper triangular form using only row operation III, and the pivot elements will all be positive.

  4. A has a Cholesky factorization LL^T (where L is lower triangular with positive diagonal entries).

  5. A can be factored into a product B^TB for some nonsingular matrix B.

Proof

We have already shown that (1) implies (2), (2) implies (3), and (3) implies (4). To see that (4) implies (5), assume that A = LL^T. If we set B = L^T, then B is nonsingular and

\[
A = LL^T = B^TB
\]

Finally, to show that (5) implies (1), assume that A = B^TB, where B is nonsingular. Let x be any nonzero vector in R^n and set y = Bx. Since B is nonsingular, y ≠ 0, and it follows that

\[
x^TAx = x^TB^TBx = y^Ty = \|y\|^2 > 0
\]

Thus, A is positive definite.
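The equivalences of Theorem 6.7.1 suggest a numerical cross-check: independent tests of the conditions should agree on any symmetric matrix. A sketch assuming NumPy; the helper name is_spd and the test matrices are ours:

```python
import numpy as np

def is_spd(A):
    """Run three equivalent positive definiteness tests; all should agree."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # Eigenvalue test (Theorem 6.6.2): all eigenvalues positive.
    eig_test = bool(np.all(np.linalg.eigvalsh(A) > 0))
    # Condition 2: all leading principal minors positive.
    minor_test = all(np.linalg.det(A[:r, :r]) > 0 for r in range(1, n + 1))
    # Condition 4: a Cholesky factorization exists.
    try:
        np.linalg.cholesky(A)
        chol_test = True
    except np.linalg.LinAlgError:
        chol_test = False
    return eig_test, minor_test, chol_test

print(is_spd([[4.0, 2.0, 2.0], [2.0, 10.0, 4.0], [2.0, 4.0, 5.0]]))  # (True, True, True)
print(is_spd([[1.0, 2.0], [2.0, 1.0]]))                              # (False, False, False)
```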

Analogous results to Theorem 6.7.1 are not valid for positive semidefiniteness. For example, consider the matrix

\[
A = \begin{bmatrix} 1 & 1 & 3 \\ 1 & 1 & 3 \\ 3 & 3 & 5 \end{bmatrix}
\]

The leading principal submatrices all have nonnegative determinants:

\[
\det(A_1) = 1, \qquad \det(A_2) = 0, \qquad \det(A_3) = 0
\]

However, A is not positive semidefinite, since it has the negative eigenvalue λ = −1. Indeed, x = (1, 1, −1)^T is an eigenvector belonging to λ = −1, and

\[
x^TAx = -3
\]
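This counterexample can be confirmed numerically. A short sketch assuming NumPy:

```python
import numpy as np

# The counterexample above: nonnegative leading principal minors,
# yet a negative eigenvalue.
A = np.array([[1.0, 1.0, 3.0],
              [1.0, 1.0, 3.0],
              [3.0, 3.0, 5.0]])

minors = [float(np.linalg.det(A[:r, :r])) for r in range(1, 4)]
eigs = np.linalg.eigvalsh(A)    # eigenvalues in ascending order

print(minors)       # approximately [1.0, 0.0, 0.0]
print(eigs.min())   # approximately -1.0
```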