In this section, we extend the definition of the determinant to $n\times n$ matrices for $n\ge 3$. For this definition, it is convenient to introduce the following notation: Given $A\in M_{n\times n}(F)$, for $n\ge 2$ denote the $(n-1)\times(n-1)$ matrix obtained from $A$ by deleting row $i$ and column $j$ by $\tilde{A}_{ij}$.
Thus for a $3\times 3$ matrix $A$ we have, for instance,
$$\tilde{A}_{11}=\begin{pmatrix}A_{22}&A_{23}\\A_{32}&A_{33}\end{pmatrix}
\quad\text{and}\quad
\tilde{A}_{32}=\begin{pmatrix}A_{11}&A_{13}\\A_{21}&A_{23}\end{pmatrix},$$
and for a $4\times 4$ matrix $B$, $\tilde{B}_{23}$ is the $3\times 3$ matrix obtained from $B$ by deleting row 2 and column 3.
Definitions. Let $A\in M_{n\times n}(F)$. If $n=1$, so that $A=(A_{11})$, we define $\det(A)=A_{11}$. For $n\ge 2$, we define $\det(A)$ recursively as
$$\det(A)=\sum_{j=1}^{n}(-1)^{1+j}A_{1j}\cdot\det(\tilde{A}_{1j}).$$
The scalar $\det(A)$ is called the determinant of $A$ and is also denoted by $|A|$. The scalar
$$(-1)^{i+j}\det(\tilde{A}_{ij})$$
is called the cofactor of the entry of $A$ in row $i$, column $j$.
Letting
$$c_{ij}=(-1)^{i+j}\det(\tilde{A}_{ij})$$
denote the cofactor of the row $i$, column $j$ entry of $A$, we can express the formula for the determinant of $A$ as
$$\det(A)=A_{11}c_{11}+A_{12}c_{12}+\cdots+A_{1n}c_{1n}.$$
Thus the determinant of $A$ equals the sum of the products of each entry in row 1 of $A$ multiplied by its cofactor. This formula is called cofactor expansion along the first row of $A$. Note that, for $2\times 2$ matrices, this definition of the determinant of $A$ agrees with the one given in Section 4.1 because
$$\det(A)=(-1)^{1+1}A_{11}\det(\tilde{A}_{11})+(-1)^{1+2}A_{12}\det(\tilde{A}_{12})=A_{11}A_{22}-A_{12}A_{21}.$$
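To make the recursive definition concrete, here is a short Python sketch (a naive, illustrative implementation with function names of our choosing, not the efficient method developed later in this section) that evaluates a determinant by cofactor expansion along the first row:

```python
def minor(A, i, j):
    """Return the submatrix obtained from A by deleting row i and column j
    (0-based indices; this is the text's A~_(i+1)(j+1))."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    """Determinant by cofactor expansion along the first row:
    det(A) = sum_j (-1)^(1+j) * A_1j * det(A~_1j)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))
```

For a $2\times 2$ matrix this reduces to $A_{11}A_{22}-A_{12}A_{21}$, in agreement with Section 4.1, and it returns 1 on any identity matrix.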
Example 1. Let $A\in M_{3\times 3}(R)$. Using cofactor expansion along the first row of $A$, we obtain
$$\det(A)=(-1)^{1+1}A_{11}\det(\tilde{A}_{11})+(-1)^{1+2}A_{12}\det(\tilde{A}_{12})+(-1)^{1+3}A_{13}\det(\tilde{A}_{13}).$$
Example 2. Let $B\in M_{3\times 3}(R)$. Using cofactor expansion along the first row of $B$, we obtain
$$\det(B)=(-1)^{1+1}B_{11}\det(\tilde{B}_{11})+(-1)^{1+2}B_{12}\det(\tilde{B}_{12})+(-1)^{1+3}B_{13}\det(\tilde{B}_{13}).$$
Example 3. Let $C\in M_{4\times 4}(R)$. Using cofactor expansion along the first row of $C$ and the results of Examples 1 and 2, we obtain
$$\det(C)=\sum_{j=1}^{4}(-1)^{1+j}C_{1j}\det(\tilde{C}_{1j}).$$
Example 4. The determinant of the $n\times n$ identity matrix is 1. We prove this assertion by mathematical induction on $n$. The result is clearly true for the $1\times 1$ identity matrix. Assume that the determinant of the $(n-1)\times(n-1)$ identity matrix is 1 for some $n\ge 2$, and let $I$ denote the $n\times n$ identity matrix. Using cofactor expansion along the first row of $I$, we obtain
$$\det(I)=(-1)^{1+1}(1)\det(\tilde{I}_{11})+(-1)^{1+2}(0)\det(\tilde{I}_{12})+\cdots+(-1)^{1+n}(0)\det(\tilde{I}_{1n})=\det(\tilde{I}_{11})=1$$
because $\tilde{I}_{11}$ is the $(n-1)\times(n-1)$ identity matrix. This shows that the determinant of the $n\times n$ identity matrix is 1, and so the determinant of any identity matrix is 1 by the principle of mathematical induction.
As is illustrated in Example 3, the calculation of a determinant using the recursive definition is extremely tedious, even for matrices as small as $4\times 4$. Later in this section, we present a more efficient method for evaluating determinants, but we must first learn more about them.
Recall from Theorem 4.1 (p. 200) that, although the determinant of a $2\times 2$ matrix is not a linear transformation, it is a linear function of each row when the other row is held fixed. We now show that a similar property is true for determinants of any size.
Theorem 4.3. The determinant of an $n\times n$ matrix is a linear function of each row when the remaining rows are held fixed. That is, for $1\le r\le n$, we have
$$\det\begin{pmatrix}a_1\\ \vdots\\ a_{r-1}\\ u+kv\\ a_{r+1}\\ \vdots\\ a_n\end{pmatrix}
=\det\begin{pmatrix}a_1\\ \vdots\\ a_{r-1}\\ u\\ a_{r+1}\\ \vdots\\ a_n\end{pmatrix}
+k\det\begin{pmatrix}a_1\\ \vdots\\ a_{r-1}\\ v\\ a_{r+1}\\ \vdots\\ a_n\end{pmatrix}$$
whenever $k$ is a scalar and $u$, $v$, and each $a_i$ are row vectors in $F^n$.
Proof. The proof is by mathematical induction on $n$. The result is immediate if $n=1$. Assume that for some integer $n\ge 2$, the determinant of any $(n-1)\times(n-1)$ matrix is a linear function of each row when the remaining rows are held fixed. Let $A$ be an $n\times n$ matrix with rows $a_1,a_2,\dots,a_n$, respectively, and suppose that for some $r$ $(1\le r\le n)$, we have $a_r=u+kv$ for some row vectors $u,v\in F^n$ and some scalar $k$. Let $u=(b_1,b_2,\dots,b_n)$ and $v=(c_1,c_2,\dots,c_n)$, and let $B$ and $C$ be the matrices obtained from $A$ by replacing row $r$ of $A$ by $u$ and $v$, respectively. We must prove that $\det(A)=\det(B)+k\det(C)$. We leave the proof of this fact to the reader for the case $r=1$. For $r>1$ and $1\le j\le n$, the rows of $\tilde{A}_{1j}$, $\tilde{B}_{1j}$, and $\tilde{C}_{1j}$ are the same except for row $r-1$. Moreover, row $r-1$ of $\tilde{A}_{1j}$ is
$$(b_1+kc_1,\dots,b_{j-1}+kc_{j-1},\,b_{j+1}+kc_{j+1},\dots,b_n+kc_n),$$
which is the sum of row $r-1$ of $\tilde{B}_{1j}$ and $k$ times row $r-1$ of $\tilde{C}_{1j}$. Since $\tilde{B}_{1j}$ and $\tilde{C}_{1j}$ are $(n-1)\times(n-1)$ matrices, we have
$$\det(\tilde{A}_{1j})=\det(\tilde{B}_{1j})+k\det(\tilde{C}_{1j})$$
by the induction hypothesis. Thus since $A_{1j}=B_{1j}=C_{1j}$, we have
$$\begin{aligned}
\det(A)&=\sum_{j=1}^{n}(-1)^{1+j}A_{1j}\det(\tilde{A}_{1j})\\
&=\sum_{j=1}^{n}(-1)^{1+j}A_{1j}\left[\det(\tilde{B}_{1j})+k\det(\tilde{C}_{1j})\right]\\
&=\sum_{j=1}^{n}(-1)^{1+j}B_{1j}\det(\tilde{B}_{1j})+k\sum_{j=1}^{n}(-1)^{1+j}C_{1j}\det(\tilde{C}_{1j})\\
&=\det(B)+k\det(C).
\end{aligned}$$
This shows that the theorem is true for $n\times n$ matrices, and so the theorem is true for all square matrices by mathematical induction.
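Theorem 4.3 can be checked on a concrete example. The following sketch (the matrix, vectors, and helper names are our own illustrations; the naive cofactor determinant is redefined for self-containment) verifies that replacing row $r$ of $A$ by $u+kv$ gives $\det(B)+k\det(C)$:

```python
def det(A):
    # naive cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

def replace_row(A, r, row):
    """Return a copy of A with row r replaced (0-based)."""
    return [row if i == r else A[i][:] for i in range(len(A))]

A = [[2, -1, 3], [0, 4, 1], [5, 2, -2]]
u, v, k = [1, 7, -3], [2, 0, 5], 4
combined = replace_row(A, 1, [u[j] + k * v[j] for j in range(3)])
# det with row r = u + k*v equals det with row u plus k times det with row v
assert det(combined) == det(replace_row(A, 1, u)) + k * det(replace_row(A, 1, v))
```

Integer entries keep the arithmetic exact, so the equality of Theorem 4.3 holds on the nose rather than up to rounding.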
Corollary. If $A\in M_{n\times n}(F)$ has a row consisting entirely of zeros, then $\det(A)=0$.
See Exercise 24.
The definition of a determinant requires that the determinant of a matrix be evaluated by cofactor expansion along the first row. Our next theorem shows that the determinant of a square matrix can be evaluated by cofactor expansion along any row. Its proof requires the following technical result.
Lemma. Let $B\in M_{n\times n}(F)$, where $n\ge 2$. If row $i$ of $B$ equals $e_k$ (the $k$th row of the $n\times n$ identity matrix) for some $k$ $(1\le k\le n)$, then $\det(B)=(-1)^{i+k}\det(\tilde{B}_{ik})$.
Proof. The proof is by mathematical induction on $n$. The lemma is easily proved for $n=2$. Assume that for some integer $n\ge 3$, the lemma is true for $(n-1)\times(n-1)$ matrices, and let $B$ be an $n\times n$ matrix in which row $i$ of $B$ equals $e_k$ for some $k$ $(1\le k\le n)$. The result follows immediately from the definition of the determinant if $i=1$. Suppose therefore that $1<i\le n$. For each $j\ne k$ $(1\le j\le n)$, let $C_{ij}$ denote the $(n-2)\times(n-2)$ matrix obtained from $B$ by deleting rows 1 and $i$ and columns $j$ and $k$. For each $j$, row $i-1$ of $\tilde{B}_{1j}$ is the following vector in $F^{n-1}$:
$$\begin{cases}e_{k-1}&\text{if } j<k\\ 0&\text{if } j=k\\ e_k&\text{if } j>k.\end{cases}$$
Hence by the induction hypothesis and the corollary to Theorem 4.3, we have
$$\det(\tilde{B}_{1j})=\begin{cases}(-1)^{(i-1)+(k-1)}\det(C_{ij})&\text{if } j<k\\ 0&\text{if } j=k\\ (-1)^{(i-1)+k}\det(C_{ij})&\text{if } j>k.\end{cases}$$
Therefore
$$\begin{aligned}
\det(B)&=\sum_{j=1}^{n}(-1)^{1+j}B_{1j}\det(\tilde{B}_{1j})\\
&=\sum_{j<k}(-1)^{1+j}B_{1j}\left[(-1)^{(i-1)+(k-1)}\det(C_{ij})\right]
 +\sum_{j>k}(-1)^{1+j}B_{1j}\left[(-1)^{(i-1)+k}\det(C_{ij})\right]\\
&=(-1)^{i+k}\left[\sum_{j<k}(-1)^{1+j}B_{1j}\det(C_{ij})+\sum_{j>k}(-1)^{1+(j-1)}B_{1j}\det(C_{ij})\right].
\end{aligned}$$
Because the expression inside the preceding bracket is the cofactor expansion of $\tilde{B}_{ik}$ along the first row, it follows that
$$\det(B)=(-1)^{i+k}\det(\tilde{B}_{ik}).$$
This shows that the lemma is true for $n\times n$ matrices, and so the lemma is true for all square matrices by mathematical induction.
We are now able to prove that cofactor expansion along any row can be used to evaluate the determinant of a square matrix.
Theorem 4.4. The determinant of a square matrix can be evaluated by cofactor expansion along any row. That is, if $A\in M_{n\times n}(F)$, then for any integer $i$ $(1\le i\le n)$,
$$\det(A)=\sum_{j=1}^{n}(-1)^{i+j}A_{ij}\cdot\det(\tilde{A}_{ij}).$$
Proof. Cofactor expansion along the first row of $A$ gives the determinant of $A$ by definition. So the result is true if $i=1$. Fix $i>1$. Row $i$ of $A$ can be written as $\sum_{j=1}^{n}A_{ij}e_j$. For $1\le j\le n$, let $B_j$ denote the matrix obtained from $A$ by replacing row $i$ of $A$ by $e_j$. Then by Theorem 4.3 and the lemma, we have
$$\det(A)=\sum_{j=1}^{n}A_{ij}\det(B_j)=\sum_{j=1}^{n}(-1)^{i+j}A_{ij}\det(\tilde{A}_{ij}).$$
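Theorem 4.4 is easy to test numerically: cofactor expansion along every row of a matrix must produce the same value. A sketch (naive recursion; the matrix is an illustration of ours):

```python
def minor(A, i, j):
    """Delete row i and column j of A (0-based indices)."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det_along_row(A, i):
    """Cofactor expansion along row i (0-based):
    det(A) = sum_j (-1)^(i+j) * A[i][j] * det(A~_ij)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** (i + j) * A[i][j] * det_along_row(minor(A, i, j), 0)
               for j in range(len(A)))

A = [[1, -2, 0, 3], [4, 1, 1, -1], [2, 0, 5, 2], [-3, 1, 1, 0]]
values = [det_along_row(A, i) for i in range(4)]
assert len(set(values)) == 1   # every row yields the same determinant
```

Expansion along a row with many zero entries skips the corresponding minors entirely, which is why a good choice of row (or a preliminary row reduction) speeds up hand computation.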
Corollary. If $A\in M_{n\times n}(F)$ has two identical rows, then $\det(A)=0$.
Proof. The proof is by mathematical induction on $n$. We leave the proof of the result to the reader in the case that $n=2$. Assume that for some integer $n\ge 3$, the corollary is true for $(n-1)\times(n-1)$ matrices, and let rows $r$ and $s$ of $A\in M_{n\times n}(F)$ be identical for $r\ne s$. Because $n\ge 3$, we can choose an integer $i$ $(1\le i\le n)$ other than $r$ and $s$. Now
$$\det(A)=\sum_{j=1}^{n}(-1)^{i+j}A_{ij}\cdot\det(\tilde{A}_{ij})$$
by Theorem 4.4. Since each $\tilde{A}_{ij}$ is an $(n-1)\times(n-1)$ matrix with two identical rows, the induction hypothesis implies that each $\det(\tilde{A}_{ij})=0$, and hence $\det(A)=0$. This completes the proof for $n\times n$ matrices, and so the corollary is true for all square matrices by mathematical induction.
It is possible to evaluate determinants more efficiently by combining cofactor expansion with the use of elementary row operations. Before such a process can be developed, we need to learn what happens to the determinant of a matrix if we perform an elementary row operation on that matrix. Theorem 4.3 provides this information for elementary row operations of type 2 (those in which a row is multiplied by a nonzero scalar). Next we turn our attention to elementary row operations of type 1 (those in which two rows are interchanged).
Theorem 4.5. If $A\in M_{n\times n}(F)$ and $B$ is a matrix obtained from $A$ by interchanging any two rows of $A$, then $\det(B)=-\det(A)$.
Proof. Let the rows of $A\in M_{n\times n}(F)$ be $a_1,a_2,\dots,a_n$, and let $B$ be the matrix obtained from $A$ by interchanging rows $r$ and $s$, where $r<s$. Thus
$$A=\begin{pmatrix}a_1\\ \vdots\\ a_r\\ \vdots\\ a_s\\ \vdots\\ a_n\end{pmatrix}
\quad\text{and}\quad
B=\begin{pmatrix}a_1\\ \vdots\\ a_s\\ \vdots\\ a_r\\ \vdots\\ a_n\end{pmatrix}.$$
Consider the matrix obtained from $A$ by replacing rows $r$ and $s$ by $a_r+a_s$. By the corollary to Theorem 4.4 and Theorem 4.3, we have
$$\begin{aligned}
0&=\det\begin{pmatrix}a_1\\ \vdots\\ a_r+a_s\\ \vdots\\ a_r+a_s\\ \vdots\\ a_n\end{pmatrix}
=\det\begin{pmatrix}a_1\\ \vdots\\ a_r\\ \vdots\\ a_r+a_s\\ \vdots\\ a_n\end{pmatrix}
+\det\begin{pmatrix}a_1\\ \vdots\\ a_s\\ \vdots\\ a_r+a_s\\ \vdots\\ a_n\end{pmatrix}\\
&=\det\begin{pmatrix}a_1\\ \vdots\\ a_r\\ \vdots\\ a_r\\ \vdots\\ a_n\end{pmatrix}
+\det\begin{pmatrix}a_1\\ \vdots\\ a_r\\ \vdots\\ a_s\\ \vdots\\ a_n\end{pmatrix}
+\det\begin{pmatrix}a_1\\ \vdots\\ a_s\\ \vdots\\ a_r\\ \vdots\\ a_n\end{pmatrix}
+\det\begin{pmatrix}a_1\\ \vdots\\ a_s\\ \vdots\\ a_s\\ \vdots\\ a_n\end{pmatrix}\\
&=0+\det(A)+\det(B)+0.
\end{aligned}$$
Therefore $\det(B)=-\det(A)$.
We now complete our investigation of how an elementary row operation affects the determinant of a matrix by showing that elementary row operations of type 3 do not change the determinant of a matrix.
Theorem 4.6. Let $A\in M_{n\times n}(F)$, and let $B$ be a matrix obtained by adding a multiple of one row of $A$ to another row of $A$. Then $\det(B)=\det(A)$.
Proof. Suppose that $B$ is the $n\times n$ matrix obtained from $A$ by adding $k$ times row $r$ to row $s$, where $r\ne s$. Let the rows of $A$ be $a_1,a_2,\dots,a_n$, and the rows of $B$ be $b_1,b_2,\dots,b_n$. Then $b_i=a_i$ for $i\ne s$ and $b_s=a_s+ka_r$. Let $C$ be the matrix obtained from $A$ by replacing row $s$ with $a_r$. Applying Theorem 4.3 to row $s$ of $B$, we obtain
$$\det(B)=\det(A)+k\det(C)=\det(A)$$
because $\det(C)=0$ by the corollary to Theorem 4.4.
In Theorem 4.2 (p. 201), we proved that a $2\times 2$ matrix is invertible if and only if its determinant is nonzero. As a consequence of Theorem 4.6, we can prove half of the promised generalization of this result in the following corollary. The converse is proved in the corollary to Theorem 4.7.
Corollary. If $A\in M_{n\times n}(F)$ has rank less than $n$, then $\det(A)=0$.
Proof. If the rank of $A$ is less than $n$, then the rows $a_1,a_2,\dots,a_n$ of $A$ are linearly dependent. By Exercise 14 of Section 1.5, some row of $A$, say, row $r$, is a linear combination of the other rows. So there exist scalars $c_i$ such that
$$a_r=c_1a_1+\cdots+c_{r-1}a_{r-1}+c_{r+1}a_{r+1}+\cdots+c_na_n.$$
Let $B$ be the matrix obtained from $A$ by adding $-c_i$ times row $i$ to row $r$ for each $i\ne r$. Then row $r$ of $B$ consists entirely of zeros, and so $\det(B)=0$. But by Theorem 4.6, $\det(B)=\det(A)$. Hence $\det(A)=0$.
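The corollary can be observed numerically: a matrix with linearly dependent rows has determinant zero. A sketch (the matrix is an illustration of ours, built so that its third row is the sum of the first two and hence its rank is less than 3):

```python
def det(A):
    # naive cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

a1, a2 = [1, 4, -2], [3, 0, 5]
a3 = [a1[j] + a2[j] for j in range(3)]   # row 3 = row 1 + row 2, so rank < 3
assert det([a1, a2, a3]) == 0
```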
The following rules summarize the effect of an elementary row operation on the determinant of a matrix $A\in M_{n\times n}(F)$.
(a) If $B$ is a matrix obtained by interchanging any two rows of $A$, then $\det(B)=-\det(A)$.
(b) If $B$ is a matrix obtained by multiplying a row of $A$ by a nonzero scalar $k$, then $\det(B)=k\det(A)$.
(c) If $B$ is a matrix obtained by adding a multiple of one row of $A$ to another row of $A$, then $\det(B)=\det(A)$.
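These three rules can be verified directly on a small example. A sketch (naive cofactor determinant; the matrix and the scalars are illustrations of ours):

```python
def det(A):
    # naive cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

A = [[1, 3, -2], [4, 0, 1], [-2, 5, 3]]
d = det(A)

# (a) interchanging two rows negates the determinant
swapped = [A[2], A[1], A[0]]
assert det(swapped) == -d

# (b) multiplying a row by k = 5 multiplies the determinant by 5
scaled = [A[0], [5 * x for x in A[1]], A[2]]
assert det(scaled) == 5 * d

# (c) adding 7 times row 1 to row 3 leaves the determinant unchanged
sheared = [A[0], A[1], [A[2][j] + 7 * A[0][j] for j in range(3)]]
assert det(sheared) == d
```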
These facts can be used to simplify the evaluation of a determinant. Consider, for instance, the matrix $A$ in Example 1. Adding 3 times row 1 of $A$ to row 2 and 4 times row 1 to row 3, we obtain a matrix $M$ in which the entries of column 1 below row 1 are zeros. Since $M$ was obtained by performing two type 3 elementary row operations on $A$, we have $\det(A)=\det(M)$. The cofactor expansion of $M$ along the first row gives
$$\det(M)=(-1)^{1+1}M_{11}\det(\tilde{M}_{11})+(-1)^{1+2}M_{12}\det(\tilde{M}_{12})+(-1)^{1+3}M_{13}\det(\tilde{M}_{13}).$$
Both $\tilde{M}_{12}$ and $\tilde{M}_{13}$ have a column consisting entirely of zeros, and so $\det(\tilde{M}_{12})=\det(\tilde{M}_{13})=0$ by the corollary to Theorem 4.6. Hence
$$\det(M)=M_{11}\det(\tilde{M}_{11}).$$
Thus with the use of two elementary row operations of type 3, we have reduced the computation of $\det(A)$ to the evaluation of one determinant of a $2\times 2$ matrix. But we can do even better. If we add an appropriate multiple of row 2 of $M$ to row 3 (another elementary row operation of type 3), we obtain an upper triangular matrix $P$ with $\det(P)=\det(M)$. Evaluating $\det(P)$ by cofactor expansion along the first row, we have
$$\det(P)=(-1)^{1+1}P_{11}\det(\tilde{P}_{11})=P_{11}P_{22}P_{33},$$
as described earlier. Since $\det(A)=\det(M)=\det(P)$, it follows that $\det(A)$ is simply the product of the diagonal entries of $P$.
The preceding calculation of det(P) illustrates an important general fact. The determinant of an upper triangular matrix is the product of its diagonal entries. (See Exercise 23.) By using elementary row operations of types 1 and 3 only, we can transform any square matrix into an upper triangular matrix, and so we can easily evaluate the determinant of any square matrix. The next two examples illustrate this technique.
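The procedure just described, reducing to upper triangular form with operations of types 1 and 3 and then multiplying the diagonal entries while accounting for the sign of each interchange, can be sketched as follows (function name is ours; exact rational arithmetic via `fractions` avoids rounding):

```python
from fractions import Fraction

def det_by_row_reduction(A):
    """Reduce A to upper triangular form using only row interchanges
    (type 1, each negating the determinant) and additions of multiples
    of one row to another (type 3, which change nothing), then return
    the signed product of the diagonal entries."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    sign = 1
    for c in range(n):
        # find a nonzero pivot in column c, at or below row c
        pivot = next((r for r in range(c, n) if M[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)          # the column is zero: rank < n, det = 0
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            sign = -sign                # type 1 operation
        for r in range(c + 1, n):
            k = M[r][c] / M[c][c]
            M[r] = [M[r][j] - k * M[c][j] for j in range(n)]  # type 3
    prod = Fraction(sign)
    for c in range(n):
        prod *= M[c][c]
    return prod
```

For instance, `det_by_row_reduction([[0, 1, 3], [2, 4, 1], [1, 0, 2]])` begins with a row interchange, exactly as in Example 5, and agrees with cofactor expansion.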
Example 5. To evaluate the determinant of the matrix $B$ in Example 2, we must begin with a row interchange. Interchanging rows 1 and 2 of $B$ produces a new matrix $C$. By means of a sequence of elementary row operations of type 3, we can transform $C$ into an upper triangular matrix, and so $\det(C)$ equals the product of that matrix's diagonal entries. Since $C$ was obtained from $B$ by an interchange of rows, it follows that
$$\det(B)=-\det(C).$$
Example 6. The technique in Example 5 can be used to evaluate the determinant of the matrix in Example 3. This $4\times 4$ matrix can be transformed into an upper triangular matrix by means of a sequence of elementary row operations of type 3 alone. Thus its determinant equals the product of the diagonal entries of the resulting upper triangular matrix.
Using elementary row operations to evaluate the determinant of a matrix, as illustrated in Example 6, is far more efficient than using cofactor expansion. Consider first the evaluation of a $2\times 2$ matrix. Since
$$\det\begin{pmatrix}a&b\\c&d\end{pmatrix}=ad-bc,$$
the evaluation of the determinant of a $2\times 2$ matrix requires 2 multiplications (and 1 subtraction). For $n\ge 3$, evaluating the determinant of an $n\times n$ matrix by cofactor expansion along any row expresses the determinant as a sum of $n$ products involving determinants of $(n-1)\times(n-1)$ matrices. Thus in all, the evaluation of the determinant of an $n\times n$ matrix by cofactor expansion along any row requires over $n!$ multiplications, whereas evaluating the determinant of an $n\times n$ matrix by elementary row operations as in Examples 5 and 6 can be shown to require only $(n^3+2n-3)/3$ multiplications. To evaluate the determinant of a $20\times 20$ matrix, which is not large by present standards, cofactor expansion along a row requires over $20!\approx 2.4\times 10^{18}$ multiplications. Thus it would take a computer performing one billion multiplications per second over 77 years to evaluate the determinant of a $20\times 20$ matrix by this method. By contrast, the method using elementary row operations requires only 2679 multiplications for this calculation and would take the same computer less than three-millionths of a second! It is easy to see why most computer programs for evaluating the determinant of an arbitrary matrix do not use cofactor expansion.
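The operation counts quoted above can be reproduced with a short computation. The recurrence $m(2)=2$, $m(n)=n\,m(n-1)+n$ counts the multiplications used by cofactor expansion (each expansion forms $n$ products of an entry with an $(n-1)\times(n-1)$ determinant); the closed form used below for row reduction, $(n^3+2n-3)/3$, is our assumption, chosen to match the 2679 multiplications quoted for a $20\times 20$ matrix:

```python
import math

def cofactor_mults(n):
    """Multiplications used by cofactor expansion along one row:
    n smaller determinants plus n entry-times-determinant products."""
    return 2 if n == 2 else n * cofactor_mults(n - 1) + n

def row_reduction_mults(n):
    # assumed closed form, consistent with the 2679 figure for n = 20
    return (n ** 3 + 2 * n - 3) // 3

n = 20
print(cofactor_mults(n) > math.factorial(n))   # cofactor count exceeds 20!
print(cofactor_mults(n) / 1e9 / (3600 * 24 * 365))  # years at 1e9 mults/second
print(row_reduction_mults(n))                  # 2679
```

Since $m(n)>n!$, the years figure exceeds 77, matching the comparison in the text.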
In this section, we have defined the determinant of a square matrix in terms of cofactor expansion along the first row. We then showed that the determinant of a square matrix can be evaluated using cofactor expansion along any row. In addition, we showed that the determinant possesses a number of special properties, including properties that enable us to calculate det(B) from det(A) whenever B is a matrix obtained from A by means of an elementary row operation. These properties enable us to evaluate determinants much more efficiently. In the next section, we continue this approach to discover additional properties of determinants.
EXERCISES

1. Label the following statements as true or false.
(a) The function $\det\colon M_{n\times n}(F)\to F$ is a linear transformation.
(b) The determinant of a square matrix can be evaluated by cofactor expansion along any row.
(c) If two rows of a square matrix $A$ are identical, then $\det(A)=0$.
(d) If $B$ is a matrix obtained from a square matrix $A$ by interchanging any two rows, then $\det(B)=-\det(A)$.
(e) If $B$ is a matrix obtained from a square matrix $A$ by multiplying a row of $A$ by a scalar $k$, then $\det(B)=k\det(A)$.
(f) If $B$ is a matrix obtained from a square matrix $A$ by adding $k$ times row $i$ to row $j$, then $\det(B)=\det(A)$.
(g) If $A\in M_{n\times n}(F)$ has rank $n$, then $\det(A)\ne 0$.
(h) The determinant of an upper triangular matrix equals the product of its diagonal entries.
2. Find the value of $k$ that satisfies the following equation:
3. Find the value of $k$ that satisfies the following equation:
4. Find the value of $k$ that satisfies the following equation:
In Exercises 5-12, evaluate the determinant of the given matrix by cofactor expansion along the indicated row.
5. along the first row
6. along the first row
7. along the second row
8. along the third row
9. along the third row
10. along the second row
11. along the fourth row
12. along the fourth row
In Exercises 13-22, evaluate the determinant of the given matrix by any legitimate method.
23. Prove that the determinant of an upper triangular matrix is the product of its diagonal entries.
24. Prove the corollary to Theorem 4.3.
25. Prove that $\det(kA)=k^n\det(A)$ for any $A\in M_{n\times n}(F)$.
26. Let $A\in M_{n\times n}(F)$. Under what conditions is $\det(-A)=\det(A)$?
27. Prove that if $A\in M_{n\times n}(F)$ has two identical columns, then $\det(A)=0$.
28. Compute $\det(E_i)$ if $E_i$ is an elementary matrix of type $i$.
29.† Prove that if $E$ is an elementary matrix, then $\det(E^t)=\det(E)$.
30. Let the rows of $A\in M_{n\times n}(F)$ be $a_1,a_2,\dots,a_n$, and let $B$ be the matrix in which the rows are $a_n,a_{n-1},\dots,a_1$. Calculate $\det(B)$ in terms of $\det(A)$.