In Theorem 3.1, we saw that performing an elementary row operation on a matrix can be accomplished by multiplying the matrix by an elementary matrix. This result is very useful in studying the effects on the determinant of applying a sequence of elementary row operations. Because the determinant of the identity matrix is 1 (see Example 4 in Section 4.2), we can interpret the statements on page 217 as the following facts about the determinants of elementary matrices.
(a) If E is an elementary matrix obtained by interchanging any two rows of I, then det(E) = -1.
(b) If E is an elementary matrix obtained by multiplying some row of I by the nonzero scalar k, then det(E) = k.
(c) If E is an elementary matrix obtained by adding a multiple of some row of I to another row, then det(E) = 1.
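Readers with access to a computer can verify these three facts numerically. The following sketch (our own illustration, using NumPy with arbitrary 3 × 3 choices) builds one elementary matrix of each type and evaluates its determinant:

```python
import numpy as np

I = np.eye(3)

# Type 1: interchange rows 1 and 2 of I
E1 = I[[1, 0, 2]]

# Type 2: multiply row 2 of I by the nonzero scalar k = 5
E2 = I.copy()
E2[1] *= 5.0

# Type 3: add 4 times row 1 of I to row 3
E3 = I.copy()
E3[2] += 4.0 * I[0]

print(np.linalg.det(E1))  # fact (a): approximately -1
print(np.linalg.det(E2))  # fact (b): approximately  5 (= k)
print(np.linalg.det(E3))  # fact (c): approximately  1
```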
We now apply these facts about determinants of elementary matrices to prove that the determinant is a multiplicative function.
Theorem 4.7. For any A, B ∈ M_{n×n}(F), det(AB) = det(A) · det(B).
We begin by establishing the result when A is an elementary matrix. If A is an elementary matrix obtained by interchanging two rows of I, then det(A) = -1. But by Theorem 3.1 (p. 149), AB is a matrix obtained by interchanging two rows of B. Hence by Theorem 4.5 (p. 215), det(AB) = -det(B) = det(A) · det(B). Similar arguments establish the result when A is an elementary matrix of type 2 or type 3. (See Exercise 18.)
If A is an n × n matrix with rank less than n, then det(A) = 0 by the corollary to Theorem 4.6 (p. 216). Since rank(AB) ≤ rank(A) < n by Theorem 3.7 (p. 159), we have det(AB) = 0. Thus det(AB) = 0 = det(A) · det(B) in this case.
On the other hand, if A has rank n, then A is invertible and hence is the product of elementary matrices (Corollary 3 to Theorem 3.6 p. 158), say, A = E_m E_{m-1} ··· E_1. The first paragraph of this proof shows that
det(AB) = det(E_m E_{m-1} ··· E_1 B) = det(E_m) · det(E_{m-1} ··· E_1 B) = ··· = det(E_m) ··· det(E_1) · det(B) = det(E_m E_{m-1} ··· E_1) · det(B) = det(A) · det(B).
Corollary. A matrix A ∈ M_{n×n}(F) is invertible if and only if det(A) ≠ 0. Furthermore, if A is invertible, then det(A^{-1}) = 1/det(A).
If A ∈ M_{n×n}(F) is not invertible, then the rank of A is less than n. So det(A) = 0 by the corollary to Theorem 4.6 (p. 217). On the other hand, if A ∈ M_{n×n}(F) is invertible, then
det(A) · det(A^{-1}) = det(AA^{-1}) = det(I) = 1
by Theorem 4.7. Hence det(A) ≠ 0 and det(A^{-1}) = 1/det(A).
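Both the theorem and its corollary are easy to test numerically. A small sketch (our own illustration; the matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Theorem 4.7: det(AB) = det(A) * det(B)
lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
print(abs(lhs - rhs) < 1e-9)  # True

# Corollary: here det(A) != 0, so A is invertible and
# det(A^{-1}) = 1/det(A)
d = np.linalg.det(A)
d_inv = np.linalg.det(np.linalg.inv(A))
print(abs(d_inv - 1.0 / d) < 1e-9)  # True
```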
In our discussion of determinants until now, we have used only the rows of a matrix. For example, the recursive definition of a determinant involved cofactor expansion along a row, and the more efficient method developed in Section 4.2 used elementary row operations. Our next result shows that the determinants of A and A^t are always equal. Since the rows of A are the columns of A^t, this fact enables us to translate any statement about determinants that involves the rows of a matrix into a corresponding statement that involves its columns.
Theorem 4.8. For any A ∈ M_{n×n}(F), det(A^t) = det(A).
If A is not invertible, then det(A) = 0. But rank(A^t) = rank(A) by Corollary 2 to Theorem 3.6 (p. 158), and so A^t is not invertible. Thus det(A^t) = 0 = det(A) in this case.
On the other hand, if A is invertible, then A is a product of elementary matrices, say A = E_m E_{m-1} ··· E_1. Since det(E_i^t) = det(E_i) for every i by Exercise 29 of Section 4.2, by Theorem 4.7 we have
det(A^t) = det(E_1^t E_2^t ··· E_m^t) = det(E_1^t) · det(E_2^t) ··· det(E_m^t) = det(E_1) · det(E_2) ··· det(E_m) = det(E_m) · det(E_{m-1}) ··· det(E_1) = det(E_m E_{m-1} ··· E_1) = det(A).
Thus, in either case, det(A^t) = det(A).
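Theorem 4.8 can likewise be checked numerically (our own illustration, with an arbitrary random matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

# Theorem 4.8: transposing a matrix does not change its determinant
print(abs(np.linalg.det(A.T) - np.linalg.det(A)) < 1e-9)  # True
```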
Among the many consequences of Theorem 4.8 are that determinants can be evaluated by cofactor expansion along a column, and that elementary column operations can be used as well as elementary row operations in evaluating a determinant. (The effect on the determinant of performing an elementary column operation is the same as the effect of performing the corresponding elementary row operation.) We conclude our discussion of determinant properties with a well-known result that relates determinants to the solutions of certain types of systems of linear equations.
Theorem 4.9 (Cramer’s Rule). Let Ax = b be the matrix form of a system of n linear equations in n unknowns, where x = (x_1, x_2, ..., x_n)^t. If det(A) ≠ 0, then this system has a unique solution, and for each k (k = 1, 2, ..., n),
x_k = det(M_k) / det(A),
where M_k is the n × n matrix obtained from A by replacing column k of A by b.
If det(A) ≠ 0, then the system Ax = b has a unique solution by the corollary to Theorem 4.7 and Theorem 3.10 (p. 173). For each integer k (1 ≤ k ≤ n), let a_k denote the kth column of A and X_k denote the matrix obtained from the n × n identity matrix by replacing column k by x. Then by Theorem 2.13 (p. 91), AX_k is the n × n matrix whose ith column is
Ae_i = a_i if i ≠ k, and Ax = b if i = k.
Thus AX_k = M_k. Evaluating X_k by cofactor expansion along row k produces
det(X_k) = x_k · det(I_{n-1}) = x_k.
Hence by Theorem 4.7,
det(M_k) = det(AX_k) = det(A) · det(X_k) = det(A) · x_k.
Therefore
x_k = det(M_k) / det(A).
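The formula in Theorem 4.9 translates directly into a short procedure: replace column k of A by b and divide determinants. A minimal sketch (our own; the function name cramer_solve is illustrative):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule; requires det(A) != 0."""
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    n = A.shape[0]
    x = np.empty(n)
    for k in range(n):
        M_k = A.copy()
        M_k[:, k] = b                     # replace column k of A by b
        x[k] = np.linalg.det(M_k) / d     # x_k = det(M_k) / det(A)
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))       # approximately [0.8 1.4]
print(np.linalg.solve(A, b))    # the same solution, by Gaussian elimination
```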
We illustrate Theorem 4.9 by using Cramer’s rule to solve the following system of linear equations:
The matrix form of this system of linear equations is Ax = b, where
Because det(A) ≠ 0, Cramer’s rule applies. Using the notation of Theorem 4.9, we have
and
Thus the unique solution to the given system of linear equations is
In applications involving systems of linear equations, we sometimes need to know that there is a solution in which the unknowns are integers. In this situation, Cramer’s rule can be useful because it implies that a system of linear equations with integral coefficients has an integral solution if the determinant of its coefficient matrix is ±1. On the other hand, Cramer’s rule is not useful for computation because it requires evaluating the determinants of n + 1 matrices of size n × n to solve a system of n linear equations in n unknowns. The amount of computation to do this is far greater than that required to solve the system by the method of Gaussian elimination, which was discussed in Section 3.4. Thus Cramer’s rule is primarily of theoretical and aesthetic interest, rather than of computational value.
As in Section 4.1, it is possible to interpret the determinant of a matrix geometrically. If the rows of an n × n matrix A are the vectors a_1, a_2, ..., a_n, respectively, then |det(A)| is the n-dimensional volume (the generalization of area in R^2 and volume in R^3) of the parallelepiped having the vectors a_1, a_2, ..., a_n as adjacent sides. (For a proof of a more generalized result, see Jerrold E. Marsden and Michael J. Hoffman, Elementary Classical Analysis, W.H. Freeman and Company, New York, 1993, p. 524.)
The volume of the parallelepiped having the vectors and as adjacent sides is
Note that the object in question is a rectangular parallelepiped (see Figure 4.6) with sides of lengths and . Hence by the familiar formula for volume, its volume should be as the determinant calculation shows.
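The rectangular case is easy to reproduce numerically. A sketch (our own illustration) with side lengths 2, 3, and 4 along the coordinate axes:

```python
import numpy as np

# Rows are the adjacent sides of a rectangular parallelepiped
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 4.0]])

volume = abs(np.linalg.det(A))
print(volume)  # 24.0 = 2 * 3 * 4, matching the familiar volume formula
```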
In our earlier discussion of the geometric significance of the determinant formed from the vectors in an ordered basis for R^2, we also saw that this determinant is positive if and only if the basis induces a right-handed coordinate system. A similar statement is true in R^n. Specifically, if γ is any ordered basis for R^n and β is the standard ordered basis for R^n, then γ induces a right-handed coordinate system if and only if det(Q) > 0, where Q is the change of coordinate matrix changing γ-coordinates into β-coordinates. Thus, for instance,
induces a left-handed coordinate system in because
whereas
induces a right-handed coordinate system in because
More generally, if β and γ are two ordered bases for R^n, then the coordinate systems induced by β and γ have the same orientation (either both are right-handed or both are left-handed) if and only if det(Q) > 0, where Q is the change of coordinate matrix changing γ-coordinates into β-coordinates.
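This orientation test amounts to checking the sign of one determinant. A small sketch in R^3 (our own illustration; the helper name same_orientation is ours):

```python
import numpy as np

def same_orientation(beta, gamma):
    """beta, gamma: lists of basis vectors for R^n.
    Q is the change of coordinate matrix changing
    gamma-coordinates into beta-coordinates."""
    B = np.column_stack(beta)    # columns are the beta vectors
    G = np.column_stack(gamma)   # columns are the gamma vectors
    Q = np.linalg.solve(B, G)    # column j of Q is [gamma_j]_beta
    return np.linalg.det(Q) > 0

std = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
swapped = [std[1], std[0], std[2]]   # interchange two basis vectors

print(same_orientation(std, std))      # True: same orientation
print(same_orientation(std, swapped))  # False: the swap reverses orientation
```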
Label the following statements as true or false.
(a) If E is an elementary matrix, then det(E) = ±1.
(b) For any A, B ∈ M_{n×n}(F), det(AB) = det(A) · det(B).
(c) A matrix M ∈ M_{n×n}(F) is invertible if and only if det(M) ≠ 0.
(d) A matrix M ∈ M_{n×n}(F) has rank n if and only if det(M) ≠ 0.
(e) For any A ∈ M_{n×n}(F), det(A^t) = -det(A).
(f) The determinant of a square matrix can be evaluated by cofactor expansion along any column.
(g) Every system of n linear equations in n unknowns can be solved by Cramer’s rule.
(h) Let Ax = b be the matrix form of a system of n linear equations in n unknowns, where x = (x_1, x_2, ..., x_n)^t. If det(A) ≠ 0 and if M_k is the n × n matrix obtained from A by replacing row k of A by b^t, then the unique solution of Ax = b is x_k = det(M_k)/det(A) for k = 1, 2, ..., n.
In Exercises 2-7, use Cramer’s rule to solve the given system of linear equations.
where
Use Theorem 4.8 to prove a result analogous to Theorem 4.3 (p. 212), but for columns.
Prove that an upper triangular matrix is invertible if and only if all its diagonal entries are nonzero.
A matrix M ∈ M_{n×n}(F) is called nilpotent if, for some positive integer k, M^k = O, where O is the n × n zero matrix. Prove that if M is nilpotent, then det(M) = 0.
A matrix M ∈ M_{n×n}(F) is called skew-symmetric if M^t = -M. Prove that if M is skew-symmetric and n is odd, then M is not invertible. What happens if n is even?
A matrix Q ∈ M_{n×n}(F) is called orthogonal if QQ^t = I. Prove that if Q is orthogonal, then det(Q) = ±1.
For M ∈ M_{n×n}(C), let M̄ be the matrix such that (M̄)_{ij} = \overline{M_{ij}} for all i, j, where \overline{M_{ij}} is the complex conjugate of M_{ij}.
(a) Prove that det(M̄) = \overline{det(M)}.
(b) A matrix Q ∈ M_{n×n}(C) is called unitary if QQ* = I, where Q* = (Q̄)^t. Prove that if Q is a unitary matrix, then |det(Q)| = 1.
Let β = {u_1, u_2, ..., u_n} be a subset of F^n containing n distinct vectors, and let B be the matrix in M_{n×n}(F) having u_j as column j. Prove that β is a basis for F^n if and only if det(B) ≠ 0.
† Prove that if A, B ∈ M_{n×n}(F) are similar, then det(A) = det(B).
Use determinants to prove that if A, B ∈ M_{n×n}(F) are such that AB = I, then A is invertible (and hence B = A^{-1}).
Let A, B ∈ M_{n×n}(F) be such that AB = -BA. Prove that if n is odd and F is not a field of characteristic two, then A or B is not invertible.
Complete the proof of Theorem 4.7 by showing that if A is an elementary matrix of type 2 or type 3, then det(AB) = det(A) · det(B).
A matrix A ∈ M_{n×n}(F) is called lower triangular if A_{ij} = 0 for 1 ≤ i < j ≤ n. Suppose that A is a lower triangular matrix. Describe det(A) in terms of the entries of A.
Suppose that M ∈ M_{n×n}(F) can be written in the form
M = ( A  B
      O  I ),
where A is a square matrix, B is a matrix of appropriate size, O is a zero matrix, and I is an identity matrix. Prove that det(M) = det(A).
† Prove that if M ∈ M_{n×n}(F) can be written in the form
M = ( A  B
      O  C ),
where A and C are square matrices, then det(M) = det(A) · det(C).
Let T: P_n(F) → F^{n+1} be the linear transformation defined in Exercise 22 of Section 2.4 by T(f) = (f(c_0), f(c_1), ..., f(c_n)), where c_0, c_1, ..., c_n are distinct scalars in an infinite field F. Let β be the standard ordered basis for P_n(F) and γ be the standard ordered basis for F^{n+1}.
(a) Show that M = [T]_β^γ has the form
( 1  c_0  c_0^2  ···  c_0^n
  1  c_1  c_1^2  ···  c_1^n
  ⋮
  1  c_n  c_n^2  ···  c_n^n ).
A matrix with this form is called a Vandermonde matrix.
(b) Use Exercise 22 of Section 2.4 to prove that det(M) ≠ 0.
(c) Prove that
det(M) = ∏ (c_j - c_i),
the product of all terms of the form c_j - c_i for 0 ≤ i < j ≤ n.
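Although the proof is left to the reader, the product formula can be sanity-checked numerically. A sketch (our own illustration, with arbitrary distinct scalars):

```python
import numpy as np
from itertools import combinations

c = np.array([1.0, 2.0, 4.0, 7.0])   # distinct scalars c_0, ..., c_3
M = np.vander(c, increasing=True)    # row i is (1, c_i, c_i^2, c_i^3)

det_M = np.linalg.det(M)
# product of (c_j - c_i) over all pairs with i < j
product = np.prod([c[j] - c[i] for i, j in combinations(range(len(c)), 2)])
print(abs(det_M - product) < 1e-6)   # True: det(M) equals the product
```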
Let A ∈ M_{n×n}(F) be nonzero. For any m (1 ≤ m ≤ n), an m × m submatrix is obtained by deleting any n - m rows and any n - m columns of A.
(a) Let k (1 ≤ k ≤ n) denote the largest integer such that some k × k submatrix has a nonzero determinant. Prove that rank(A) ≥ k.
(b) Conversely, suppose that rank(A) = k. Prove that there exists a k × k submatrix with a nonzero determinant.
Let have the form
Compute det(A + tI), where I is the n × n identity matrix.
Let c_{jk} denote the cofactor of the row j, column k entry of the matrix A ∈ M_{n×n}(F).
(a) Prove that if B is the matrix obtained from A by replacing column k by e_j, then det(B) = c_{jk}.
(b) Show that for 1 ≤ j ≤ n we have
A · (c_{j1}, c_{j2}, ..., c_{jn})^t = det(A) · e_j.
Hint: Apply Cramer’s rule to Ax = e_j.
(c) Deduce that if C is the n × n matrix such that C_{kj} = c_{jk}, then AC = det(A) · I.
(d) Show that if det(A) ≠ 0, then A^{-1} = [det(A)]^{-1} · C.
The following definition is used in Exercises 26-27.
The classical adjoint of a square matrix A is the transpose of the matrix whose ij-entry is the ij-cofactor of A.
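A direct implementation of this definition can serve as a check on hand computations. The sketch below (our own; the helper name classical_adjoint is ours) also verifies the identity AC = det(A) · I from the preceding cofactor exercise:

```python
import numpy as np

def classical_adjoint(A):
    """Transpose of the matrix whose ij-entry is the ij-cofactor of A."""
    n = A.shape[0]
    cof = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            # delete row i and column j, then take the signed minor
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

A = np.array([[1.0, 2.0], [3.0, 4.0]])
C = classical_adjoint(A)
print(C)  # the classical adjoint of A: ((4, -2), (-3, 1))
print(np.allclose(A @ C, np.linalg.det(A) * np.eye(2)))  # True: AC = det(A) I
```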
Find the classical adjoint of each of the following matrices.
Let C be the classical adjoint of A ∈ M_{n×n}(F). Prove the following statements.
(a) det(C) = [det(A)]^{n-1}.
(b) C^t is the classical adjoint of A^t.
(c) If A is an invertible upper triangular matrix, then C and A^{-1} are both upper triangular matrices.
Let y_1, y_2, ..., y_n be linearly independent functions in C^∞. For each y ∈ C^∞, define T(y) ∈ C^∞ by
[T(y)](t) = det ( y(t)        y_1(t)        ···  y_n(t)
                  y′(t)       y_1′(t)       ···  y_n′(t)
                  ⋮
                  y^{(n)}(t)  y_1^{(n)}(t)  ···  y_n^{(n)}(t) ).
The preceding determinant is called the Wronskian of y, y_1, ..., y_n.
(a) Prove that T: C^∞ → C^∞ is a linear transformation.
(b) Prove that N(T) contains span({y_1, y_2, ..., y_n}).