With each $n \times n$ matrix A, it is possible to associate a scalar, det(A), whose value will tell us whether the matrix is nonsingular. Before proceeding to the general definition, let us consider the following cases.
Case 1. $1 \times 1$ Matrices. If $A = (a)$ is a $1 \times 1$ matrix, then A will have a multiplicative inverse if and only if $a \neq 0$. Thus, if we define
$$\det(A) = a$$
then A will be nonsingular if and only if $\det(A) \neq 0$.
Case 2. $2 \times 2$ Matrices. Let
$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$$
By Theorem 1.5.2, A will be nonsingular if and only if it is row equivalent to I. Then, if $a_{11} \neq 0$, we can test whether A is row equivalent to I by performing the following operations:
Multiply the second row of A by $a_{11}$:
$$\begin{pmatrix} a_{11} & a_{12} \\ a_{11}a_{21} & a_{11}a_{22} \end{pmatrix}$$
Subtract $a_{21}$ times the first row from the new second row:
$$\begin{pmatrix} a_{11} & a_{12} \\ 0 & a_{11}a_{22} - a_{21}a_{12} \end{pmatrix}$$
Since $a_{11} \neq 0$, the resulting matrix will be row equivalent to I if and only if
$$a_{11}a_{22} - a_{21}a_{12} \neq 0 \tag{1}$$
If $a_{11} = 0$, we can switch the two rows of A. The resulting matrix
$$\begin{pmatrix} a_{21} & a_{22} \\ 0 & a_{12} \end{pmatrix}$$
will be row equivalent to I if and only if $a_{21}a_{12} \neq 0$. This requirement is equivalent to condition (1) when $a_{11} = 0$. Thus, if A is any $2 \times 2$ matrix and we define
$$\det(A) = a_{11}a_{22} - a_{12}a_{21}$$
then A is nonsingular if and only if $\det(A) \neq 0$.
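As a quick computational sketch (not part of the text), the $2 \times 2$ determinant test for nonsingularity can be written out directly; the function names here are illustrative:

```python
# Illustrative sketch: the 2x2 determinant det(A) = a11*a22 - a12*a21
# used as a test for nonsingularity.

def det2(a11, a12, a21, a22):
    """Determinant of the 2x2 matrix [[a11, a12], [a21, a22]]."""
    return a11 * a22 - a12 * a21

def is_nonsingular2(a11, a12, a21, a22):
    """A 2x2 matrix is nonsingular exactly when its determinant is nonzero."""
    return det2(a11, a12, a21, a22) != 0

print(det2(3, 4, 2, 1))             # 3*1 - 4*2 = -5
print(is_nonsingular2(1, 2, 2, 4))  # rows proportional -> singular -> False
```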
We can refer to the determinant of a specific matrix by enclosing the array between vertical lines. For example, if
$$A = \begin{pmatrix} 3 & 4 \\ 2 & 1 \end{pmatrix}$$
then
$$\begin{vmatrix} 3 & 4 \\ 2 & 1 \end{vmatrix}$$
represents the determinant of A.
Case 3. $3 \times 3$ Matrices. We can test whether a $3 \times 3$ matrix is nonsingular by performing row operations to see if the matrix is row equivalent to the identity matrix I. To carry out the elimination in the first column of an arbitrary $3 \times 3$ matrix A, let us first assume that $a_{11} \neq 0$. The elimination can then be performed by subtracting $a_{21}/a_{11}$ times the first row from the second and $a_{31}/a_{11}$ times the first row from the third:
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \rightarrow \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & \dfrac{a_{11}a_{22} - a_{21}a_{12}}{a_{11}} & \dfrac{a_{11}a_{23} - a_{21}a_{13}}{a_{11}} \\ 0 & \dfrac{a_{11}a_{32} - a_{31}a_{12}}{a_{11}} & \dfrac{a_{11}a_{33} - a_{31}a_{13}}{a_{11}} \end{pmatrix}$$
The matrix on the right will be row equivalent to I if and only if
$$\frac{(a_{11}a_{22} - a_{21}a_{12})(a_{11}a_{33} - a_{31}a_{13}) - (a_{11}a_{32} - a_{31}a_{12})(a_{11}a_{23} - a_{21}a_{13})}{a_{11}^{2}} \neq 0$$
Although the algebra is somewhat messy, this condition can be simplified to
$$a_{11}a_{22}a_{33} - a_{11}a_{32}a_{23} - a_{12}a_{21}a_{33} + a_{12}a_{31}a_{23} + a_{13}a_{21}a_{32} - a_{13}a_{31}a_{22} \neq 0 \tag{2}$$
Thus, if we define
$$\det(A) = a_{11}a_{22}a_{33} - a_{11}a_{32}a_{23} - a_{12}a_{21}a_{33} + a_{12}a_{31}a_{23} + a_{13}a_{21}a_{32} - a_{13}a_{31}a_{22} \tag{3}$$
then, for the case $a_{11} \neq 0$, the matrix will be nonsingular if and only if $\det(A) \neq 0$. What if $a_{11} = 0$? Consider the following possibilities:
(i) $a_{11} = 0$, $a_{21} \neq 0$
(ii) $a_{11} = a_{21} = 0$, $a_{31} \neq 0$
(iii) $a_{11} = a_{21} = a_{31} = 0$
In case (i), one can show that A is row equivalent to I if and only if
$$-a_{12}a_{21}a_{33} + a_{12}a_{31}a_{23} + a_{13}a_{21}a_{32} - a_{13}a_{31}a_{22} \neq 0$$
But this condition is the same as condition (2) with $a_{11} = 0$. The details of case (i) are left as an exercise for the reader (see Exercise 7 at the end of the section).
In case (ii), it follows that
$$A = \begin{pmatrix} 0 & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
is row equivalent to I if and only if
$$a_{31}(a_{12}a_{23} - a_{22}a_{13}) \neq 0$$
Again, this is a special case of condition (2) with $a_{11} = a_{21} = 0$.
Clearly, in case (iii) the first column of A consists entirely of zeros, so the matrix A cannot be row equivalent to I and hence must be singular. In this case, if we set $a_{11}$, $a_{21}$, and $a_{31}$ equal to 0 in formula (3), the result will be $\det(A) = 0$.
In general, then, formula (2) gives a necessary and sufficient condition for a $3 \times 3$ matrix A to be nonsingular (regardless of the value of $a_{11}$).
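Formula (3) can be transcribed term by term as a short sketch (function name is illustrative). As a spot check, a matrix whose first column is all zeros, case (iii), should give determinant 0:

```python
# Sketch: the six-term 3x3 determinant of formula (3), with the matrix
# given as a list of rows and 0-indexed entries.

def det3(a):
    """det of a 3x3 matrix a, written out per formula (3)."""
    return (a[0][0] * a[1][1] * a[2][2] - a[0][0] * a[2][1] * a[1][2]
            - a[0][1] * a[1][0] * a[2][2] + a[0][1] * a[2][0] * a[1][2]
            + a[0][2] * a[1][0] * a[2][1] - a[0][2] * a[2][0] * a[1][1])

# Case (iii): every term contains an entry of the zero first column.
print(det3([[0, 1, 2], [0, 3, 4], [0, 5, 6]]))  # 0
```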
We would now like to define the determinant of an $n \times n$ matrix. To see how to do this, note that the determinant of a $2 \times 2$ matrix
$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$$
can be defined in terms of the two $1 \times 1$ matrices
$$M_{11} = (a_{22}) \quad \text{and} \quad M_{12} = (a_{21})$$
The matrix $M_{11}$ is formed from A by deleting its first row and first column, and $M_{12}$ is formed from A by deleting its first row and second column.
The determinant of A can be expressed in the form
$$\det(A) = a_{11}a_{22} - a_{12}a_{21} = a_{11}\det(M_{11}) - a_{12}\det(M_{12}) \tag{4}$$
For a $3 \times 3$ matrix A, we can rewrite equation (3) in the form
$$\det(A) = a_{11}(a_{22}a_{33} - a_{32}a_{23}) - a_{12}(a_{21}a_{33} - a_{31}a_{23}) + a_{13}(a_{21}a_{32} - a_{31}a_{22})$$
For $j = 1$, 2, 3, let $M_{1j}$ denote the $2 \times 2$ matrix formed from A by deleting its first row and jth column. The determinant of A can then be represented in the form
$$\det(A) = a_{11}\det(M_{11}) - a_{12}\det(M_{12}) + a_{13}\det(M_{13}) \tag{5}$$
where
$$M_{11} = \begin{pmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{pmatrix}, \quad M_{12} = \begin{pmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{pmatrix}, \quad M_{13} = \begin{pmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}$$
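The minor-based expression (5) translates into a small sketch that builds each $2 \times 2$ minor by deleting the first row and one column (helper names assumed, not from the text):

```python
# Sketch of equation (5): det(A) = a11 det(M11) - a12 det(M12) + a13 det(M13),
# where M_1j is the 2x2 matrix left after deleting row 0 and column j.

def det2x2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor_first_row(a, j):
    """The 2x2 matrix formed from a 3x3 matrix a by deleting row 0 and column j."""
    return [[row[c] for c in range(3) if c != j] for row in a[1:]]

def det3_minors(a):
    """Alternating sum of first-row entries times their minors."""
    return sum((-1) ** j * a[0][j] * det2x2(minor_first_row(a, j))
               for j in range(3))

a = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(det3_minors(a))  # 1*(50-48) - 2*(40-42) + 3*(32-35) = 2 + 4 - 9 = -3
```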
To see how to generalize (4) and (5) to the case $n > 3$, we introduce the following definition.
Definition. Let $A = (a_{ij})$ be an $n \times n$ matrix, and let $M_{ij}$ denote the $(n-1) \times (n-1)$ matrix obtained from A by deleting the row and column containing $a_{ij}$. The determinant of $M_{ij}$ is called the minor of $a_{ij}$, and the cofactor $A_{ij}$ of $a_{ij}$ is defined by
$$A_{ij} = (-1)^{i+j}\det(M_{ij})$$
In view of this definition, for a $2 \times 2$ matrix A, we may rewrite equation (4) in the form
$$\det(A) = a_{11}A_{11} + a_{12}A_{12} \qquad (n = 2) \tag{6}$$
Equation (6) is called the cofactor expansion of det(A) along the first row of A. Note that we could also write
$$\det(A) = a_{21}(-a_{12}) + a_{22}a_{11} = a_{21}A_{21} + a_{22}A_{22} \tag{7}$$
Equation (7) expresses det(A) in terms of the entries of the second row of A and their cofactors. Actually, there is no reason that we must expand along a row of the matrix; the determinant could just as well be represented by the cofactor expansion along one of the columns:
$$\det(A) = a_{11}a_{22} + a_{21}(-a_{12}) = a_{11}A_{11} + a_{21}A_{21} \qquad \text{(first column)}$$
$$\det(A) = a_{12}(-a_{21}) + a_{22}a_{11} = a_{12}A_{12} + a_{22}A_{22} \qquad \text{(second column)}$$
For a $3 \times 3$ matrix A, we have
$$\det(A) = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13} \tag{8}$$
Thus, the determinant of a $3 \times 3$ matrix can be defined in terms of the elements in the first row of the matrix and their corresponding cofactors.
If
$$A = \begin{pmatrix} 2 & 5 & 4 \\ 3 & 1 & 2 \\ 5 & 4 & 6 \end{pmatrix}$$
then
$$\det(A) = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13} = 2\begin{vmatrix} 1 & 2 \\ 4 & 6 \end{vmatrix} - 5\begin{vmatrix} 3 & 2 \\ 5 & 6 \end{vmatrix} + 4\begin{vmatrix} 3 & 1 \\ 5 & 4 \end{vmatrix} = 2(-2) - 5(8) + 4(7) = -16$$
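A first-row cofactor expansion can be checked numerically with the following sketch; the $3 \times 3$ entries below are illustrative, and the helper names are my own:

```python
# Numerical sketch of a first-row cofactor expansion,
# det(A) = a11*A11 + a12*A12 + a13*A13, with illustrative entries.

def det2x2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def cofactor(a, i, j):
    """A_ij = (-1)**(i+j) * det(M_ij) for a 3x3 matrix a (0-indexed i, j)."""
    m = [[a[r][c] for c in range(3) if c != j] for r in range(3) if r != i]
    return (-1) ** (i + j) * det2x2(m)

a = [[2, 5, 4], [3, 1, 2], [5, 4, 6]]
d = sum(a[0][j] * cofactor(a, 0, j) for j in range(3))
print(d)  # 2*(6-8) - 5*(18-10) + 4*(12-5) = -16
```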
As in the case of $2 \times 2$ matrices, the determinant of a $3 \times 3$ matrix can be represented as a cofactor expansion using any row or column. For example, equation (3) can be rewritten in the form
$$\det(A) = a_{31}a_{12}a_{23} - a_{31}a_{13}a_{22} - a_{32}a_{11}a_{23} + a_{32}a_{13}a_{21} + a_{33}a_{11}a_{22} - a_{33}a_{12}a_{21} = a_{31}A_{31} + a_{32}A_{32} + a_{33}A_{33}$$
This is the cofactor expansion of det(A) along the third row of A.
Let A be the matrix in Example 1. The cofactor expansion of det(A) along the second column is given by
$$\det(A) = a_{12}A_{12} + a_{22}A_{22} + a_{32}A_{32} = -5\begin{vmatrix} 3 & 2 \\ 5 & 6 \end{vmatrix} + 1\begin{vmatrix} 2 & 4 \\ 5 & 6 \end{vmatrix} - 4\begin{vmatrix} 2 & 4 \\ 3 & 2 \end{vmatrix} = -5(8) + 1(-8) - 4(-8) = -16$$
The determinant of a $4 \times 4$ matrix can be defined in terms of a cofactor expansion along any row or column. To compute the value of a $4 \times 4$ determinant, we would have to evaluate four $3 \times 3$ determinants. In general, the determinant of an $n \times n$ matrix can be defined inductively in terms of the cofactor expansion along the first row:
$$\det(A) = \begin{cases} a_{11} & \text{if } n = 1 \\ a_{11}A_{11} + a_{12}A_{12} + \cdots + a_{1n}A_{1n} & \text{if } n > 1 \end{cases}$$
As we have seen, it is not necessary to limit ourselves to using the first row for the cofactor expansion. We state the following theorem without proof:
Theorem 2.1.1. If A is an $n \times n$ matrix with $n \geq 2$, then det(A) can be expressed as a cofactor expansion using any row or column of A:
$$\det(A) = a_{i1}A_{i1} + a_{i2}A_{i2} + \cdots + a_{in}A_{in} = a_{1j}A_{1j} + a_{2j}A_{2j} + \cdots + a_{nj}A_{nj}$$
for $i = 1, \ldots, n$ and $j = 1, \ldots, n$.
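This claim can be checked numerically with a short recursive sketch: expanding along any row $i$ or any column $j$ gives the same value. The function names and the zero-heavy $4 \times 4$ matrix below are illustrative, not from the text:

```python
# Sketch: recursive cofactor expansion along an arbitrary row or column.

def minor(a, i, j):
    """Delete row i and column j from the square matrix a."""
    return [[a[r][c] for c in range(len(a)) if c != j]
            for r in range(len(a)) if r != i]

def det_by_row(a, i=0):
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** (i + j) * a[i][j] * det_by_row(minor(a, i, j))
               for j in range(n))

def det_by_col(a, j=0):
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** (i + j) * a[i][j] * det_by_col(minor(a, i, j))
               for i in range(n))

a = [[0, 2, 3, 0], [0, 4, 5, 0], [0, 1, 0, 3], [2, 0, 1, 3]]
print([det_by_row(a, i) for i in range(4)])  # same value for every row
print([det_by_col(a, j) for j in range(4)])  # same value for every column
```

Expanding down the first column is cheapest here, since three of its four terms vanish; every choice of row or column agrees.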
The cofactor expansion of a $4 \times 4$ determinant will involve four $3 \times 3$ determinants. We can often save work by expanding along the row or column that contains the most zeros. For example, to evaluate
$$\begin{vmatrix} 0 & 2 & 3 & 0 \\ 0 & 4 & 5 & 0 \\ 0 & 1 & 0 & 3 \\ 2 & 0 & 1 & 3 \end{vmatrix}$$
we would expand down the first column. The first three terms will drop out, leaving
$$-2\begin{vmatrix} 2 & 3 & 0 \\ 4 & 5 & 0 \\ 1 & 0 & 3 \end{vmatrix} = -2 \cdot 3\begin{vmatrix} 2 & 3 \\ 4 & 5 \end{vmatrix} = 12$$
For $n \leq 3$, we have seen that an $n \times n$ matrix A is nonsingular if and only if $\det(A) \neq 0$. In the next section, we will show that this result holds for all values of n. In that section, we also look at the effect of row operations on the value of the determinant, and we will make use of row operations to derive a more efficient method for computing the value of a determinant.
We close this section with three theorems that are consequences of the cofactor expansion definition. The proofs of the last two theorems are left for the reader (see Exercises 8, 9, and 10 at the end of this section).
Theorem 2.1.2. If A is an $n \times n$ matrix, then $\det(A^{T}) = \det(A)$.
Proof
The proof is by induction on n. Clearly, the result holds if $n = 1$, since a $1 \times 1$ matrix is necessarily symmetric. Assume that the result holds for all $k \times k$ matrices and that A is a $(k+1) \times (k+1)$ matrix. Expanding det(A) along the first row of A, we get
$$\det(A) = a_{11}\det(M_{11}) - a_{12}\det(M_{12}) + \cdots \pm a_{1,k+1}\det(M_{1,k+1})$$
Since the $M_{1j}$ are all $k \times k$ matrices, it follows from the induction hypothesis that
$$\det(A) = a_{11}\det(M_{11}^{T}) - a_{12}\det(M_{12}^{T}) + \cdots \pm a_{1,k+1}\det(M_{1,k+1}^{T}) \tag{9}$$
The right-hand side of (9) is just the expansion by minors of $\det(A^{T})$ using the first column of $A^{T}$. Therefore,
$$\det(A^{T}) = \det(A)$$
∎
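As a quick numerical illustration of the theorem (a sketch with an assumed recursive det helper; the sample entries are illustrative):

```python
# Sketch: check det(A^T) == det(A) on a sample 3x3 matrix.

def minor(a, i, j):
    return [[a[r][c] for c in range(len(a)) if c != j]
            for r in range(len(a)) if r != i]

def det(a):
    """Recursive cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor(a, 0, j))
               for j in range(len(a)))

def transpose(a):
    return [list(col) for col in zip(*a)]

a = [[1, 4, 2], [0, 3, 5], [7, 6, 8]]
print(det(a), det(transpose(a)))  # equal values
```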
Theorem 2.1.3. If A is an $n \times n$ triangular matrix, then the determinant of A equals the product of the diagonal elements of A.
Proof
In view of Theorem 2.1.2, it suffices to prove the theorem for lower triangular matrices. The result follows easily using the cofactor expansion and induction on n. The details are left for the reader (see Exercise 8 at the end of the section).
∎
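The triangular case can be spot-checked with the same style of sketch (det helper and sample entries assumed): for a triangular matrix, expansion along the first row or column collapses to the product of the diagonal entries.

```python
# Sketch: for triangular matrices, det(A) equals the product of the diagonal.

def minor(a, i, j):
    return [[a[r][c] for c in range(len(a)) if c != j]
            for r in range(len(a)) if r != i]

def det(a):
    """Recursive cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor(a, 0, j))
               for j in range(len(a)))

lower = [[2, 0, 0], [5, 3, 0], [1, 4, 7]]
upper = [[2, 5, 1], [0, 3, 4], [0, 0, 7]]
print(det(lower), det(upper))  # both 2 * 3 * 7 = 42
```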
Theorem 2.1.4. Let A be an $n \times n$ matrix.
(i) If A has a row or column consisting entirely of zeros, then $\det(A) = 0$.
(ii) If A has two identical rows or two identical columns, then $\det(A) = 0$.
Both of these results can be easily proved with the use of the cofactor expansion. The proofs are left for the reader (see Exercises 9 and 10).
∎
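Both parts of the theorem can be spot-checked numerically (a sketch; the det helper and sample matrices are illustrative):

```python
# Sketch: a zero row forces det(A) = 0, as do two identical rows.

def minor(a, i, j):
    return [[a[r][c] for c in range(len(a)) if c != j]
            for r in range(len(a)) if r != i]

def det(a):
    """Recursive cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor(a, 0, j))
               for j in range(len(a)))

zero_row = [[1, 2, 3], [0, 0, 0], [4, 5, 6]]
identical_rows = [[1, 2, 3], [1, 2, 3], [4, 5, 6]]
print(det(zero_row), det(identical_rows))  # 0 0
```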
In the next section, we look at the effect of row operations on the value of the determinant. This will allow us to make use of Theorem 2.1.3 to derive a more efficient method for computing the value of a determinant.