4.5 Row and Column Spaces

In numerous examples we have observed the phenomenon of "disappearing equations" that sometimes occurs when we solve a linear system using the method of Gaussian elimination. The appearance in this process of a trivial equation 0 = 0 means that one of the original equations was redundant. For instance, in the system

\begin{aligned}
x - 2y + 2z &= 0 \\
x + 4y + 3z &= 0 \\
2x + 2y + 5z &= 0,
\end{aligned}

the third equation provides no additional information about a solution (x, y, z) because it is merely the sum of the first two equations.
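This bookkeeping is easy to check by machine. The following minimal sketch, assuming Python with the SymPy library (our choice of tool, not something the text uses), row-reduces the coefficient matrix of the system above; the zero row that appears is exactly the "disappearing equation" 0 = 0.

from sympy import Matrix

# Coefficient matrix of the homogeneous system above
A = Matrix([[1, -2, 2],
            [1,  4, 3],
            [2,  2, 5]])
E, pivots = A.rref()   # reduced echelon form and pivot column indices
print(E)               # the last row is all zeros: the third equation was redundant
print(A.rank())        # 2 -- only two of the three equations carry information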

Given a homogeneous linear system, it is natural to ask how many of the equations are “irredundant,” and which ones they are. We will see that an answer to this question leads to a natural and simple relation between the number of irredundant equations, the number of unknowns, and the number of linearly independent solutions.

Row Space and Row Rank

The individual equations of the homogeneous linear system Ax = 0 correspond to the "row matrices"

[a_{11} \; a_{12} \; \cdots \; a_{1n}], \quad [a_{21} \; a_{22} \; \cdots \; a_{2n}], \quad \dots, \quad [a_{m1} \; a_{m2} \; \cdots \; a_{mn}]

of the m × n matrix A = [aij]. The row vectors of A are the m vectors

\begin{aligned}
r_1 &= (a_{11}, a_{12}, \dots, a_{1n}) \\
r_2 &= (a_{21}, a_{22}, \dots, a_{2n}) \\
&\;\;\vdots \\
r_m &= (a_{m1}, a_{m2}, \dots, a_{mn})
\end{aligned}
(1)

in R^n. Recalling from Section 3.4 the convention that n-tuples denote column-vector elements of R^n, we see that the row vectors of A are the transposes of its row matrices; that is,

r_i = \begin{bmatrix} a_{i1} \\ a_{i2} \\ \vdots \\ a_{in} \end{bmatrix} = [a_{i1} \; a_{i2} \; \cdots \; a_{in}]^T

for i = 1, 2, …, m, and thus actually are column vectors (despite being called "row vectors"). The subspace of R^n spanned by the m row vectors r1, r2, …, rm is called the row space Row(A) of the matrix A. The dimension of the row space Row(A) is called the row rank of the matrix A.

Example 1

Consider the 4 × 5 echelon matrix

A = \begin{bmatrix} 1 & 3 & 2 & 5 & 3 \\ 0 & 0 & 1 & -4 & 2 \\ 0 & 0 & 0 & 1 & 7 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.

Its row vectors are r1 = (1, 3, 2, 5, 3), r2 = (0, 0, 1, -4, 2), r3 = (0, 0, 0, 1, 7), and r4 = (0, 0, 0, 0, 0). We want to show that the row vectors that are not zero are linearly independent. To this end we calculate the linear combination

c_1 r_1 + c_2 r_2 + c_3 r_3 = (c_1,\ 3c_1,\ 2c_1 + c_2,\ 5c_1 - 4c_2 + c_3,\ 3c_1 + 2c_2 + 7c_3).

If c1r1 + c2r2 + c3r3 = 0, then the first component gives c1 = 0; next, the third component 2c1 + c2 = 0 yields c2 = 0; and finally the fourth component 5c1 - 4c2 + c3 = 0 yields c3 = 0. Thus the vectors r1, r2, and r3 are linearly independent and therefore form a basis for the row space Row(A). Hence Row(A) is a 3-dimensional subspace of R^5, and the row rank of A is 3.
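The same conclusion can be reached computationally. A minimal check, again assuming SymPy (our assumption, not part of the text):

from sympy import Matrix

A = Matrix([[1, 3, 2,  5, 3],
            [0, 0, 1, -4, 2],
            [0, 0, 0,  1, 7],
            [0, 0, 0,  0, 0]])
print(A.rank())   # 3: the three nonzero rows are linearly independent, so the row rank is 3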

It should be apparent that the method used in Example 1 applies to any echelon matrix E = [eij]. If the first column of E is not all zero, then its nonzero row vectors r1, r2, …, rk have the forms

\begin{aligned}
r_1 &= (e_{11}, \dots, e_{1p}, \dots, e_{1q}, \dots), \\
r_2 &= (0, \dots, 0, e_{2p}, \dots, e_{2q}, \dots), \\
r_3 &= (0, \dots, 0, \dots, 0, e_{3q}, \dots),
\end{aligned}

and so forth (where e11, e2p, and e3q denote nonzero leading entries). Then the equation

c_1 r_1 + c_2 r_2 + \cdots + c_k r_k = 0

implies that

c_1 e_{11} = 0, \qquad c_1 e_{1p} + c_2 e_{2p} = 0, \qquad c_1 e_{1q} + c_2 e_{2q} + c_3 e_{3q} = 0,

and so forth. We can therefore conclude in turn that c1 = 0, c2 = 0, …, ck = 0. Thus the row vectors r1, r2, …, rk are linearly independent, and hence we have the following result (Theorem 1): the nonzero row vectors of an echelon matrix are linearly independent and therefore form a basis for its row space, so the row rank of an echelon matrix equals the number of its nonzero rows.

We investigate the row space of an arbitrary matrix A by reducing A to an echelon matrix E. Theorem 2 then guarantees that A and E have the same row space: row-equivalent matrices have the same row space. Recall that two matrices are (row) equivalent provided that each can be transformed into the other by elementary row operations.

Theorems 1 and 2 together provide a routine procedure (Algorithm 1) for finding a basis for the row space of a given matrix, and thereby its row rank: reduce the matrix by elementary row operations to an echelon matrix E; the nonzero row vectors of E then form a basis for its row space.

Example 2

To find a basis for the row space of the matrix

A = \begin{bmatrix} 1 & 2 & 1 & 3 & 2 \\ 3 & 4 & 9 & 0 & 7 \\ 2 & 3 & 5 & 1 & 8 \\ 2 & 2 & 8 & -3 & 5 \end{bmatrix},
(2)

we reduce it to its echelon form

E = \begin{bmatrix} 1 & 2 & 1 & 3 & 2 \\ 0 & 1 & -3 & 5 & -4 \\ 0 & 0 & 0 & 1 & -7 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.
(3)

Then the nonzero row vectors v1 = (1, 2, 1, 3, 2), v2 = (0, 1, -3, 5, -4), and v3 = (0, 0, 0, 1, -7) form a basis for the row space of A. Thus Row(A) is a 3-dimensional subspace of R^5 and the row rank of A is 3.

In Example 2, note that Algorithm 1 does not tell us that the first three row vectors of the matrix A itself are linearly independent. We know that some three row vectors of A span Row(A), but without further investigation we do not know which three.
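The procedure of Algorithm 1 is easy to mechanize. Here is a sketch, assuming SymPy; the helper name row_space_basis is ours. Note that rref() produces the reduced echelon form, whose nonzero rows differ from those of E in (3) but span the same row space.

from sympy import Matrix

def row_space_basis(A):
    # Nonzero rows of an echelon form of A form a basis for Row(A).
    E, _ = A.rref()
    return [E.row(i) for i in range(E.rows) if any(E.row(i))]

A = Matrix([[1, 2, 1,  3, 2],
            [3, 4, 9,  0, 7],
            [2, 3, 5,  1, 8],
            [2, 2, 8, -3, 5]])
for r in row_space_basis(A):
    print(r)
print(A.rank())   # 3, the row rank found in Example 2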

Column Space and Column Rank

Now we turn our attention from row vectors to column vectors. Given an m × n matrix A = [aij], the column vectors of A are the n vectors

c_1 = \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix}, \quad c_2 = \begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix}, \quad \dots, \quad c_n = \begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix}
(4)

in R^m. The subspace of R^m spanned by the n column vectors c1, c2, …, cn is called the column space Col(A) of the matrix A. The dimension of the space Col(A) is called the column rank of the matrix A.

Example 3

Consider the 4 × 5 echelon matrix

E = \begin{bmatrix} 1 & 2 & 1 & 3 & 2 \\ 0 & 1 & -3 & 5 & -4 \\ 0 & 0 & 0 & 1 & -7 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.
(5)

Its five column vectors c1, c2, c3, c4, c5 all lie in the subspace R^3 = {(x1, x2, x3, 0)} of R^4. The column vectors that contain the leading entries in the nonzero rows of E are

c_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad c_2 = \begin{bmatrix} 2 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad \text{and} \quad c_4 = \begin{bmatrix} 3 \\ 5 \\ 1 \\ 0 \end{bmatrix}.
(6)

Note that

a_1 c_1 + a_2 c_2 + a_4 c_4 = (a_1 + 2a_2 + 3a_4,\ a_2 + 5a_4,\ a_4,\ 0).

Hence a1c1 + a2c2 + a4c4 = 0 readily implies that a1 = a2 = a4 = 0. Thus the vectors c1, c2, and c4 are linearly independent and therefore form a basis for R^3 = {(x1, x2, x3, 0)}. Hence Col(E) is a 3-dimensional subspace of R^4 and the column rank of E is 3.
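A corresponding computational check, assuming SymPy: the pivot indices reported by rref() are exactly the columns containing the leading entries, and the submatrix they form has rank 3.

from sympy import Matrix

E = Matrix([[1, 2,  1, 3,  2],
            [0, 1, -3, 5, -4],
            [0, 0,  0, 1, -7],
            [0, 0,  0, 0,  0]])
_, pivots = E.rref()
print(pivots)    # (0, 1, 3): the first, second, and fourth columns are the pivot columns
P = Matrix.hstack(*[E.col(j) for j in pivots])
print(P.rank())  # 3, so c1, c2, and c4 are linearly independent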

To find the column rank of an arbitrary m × n matrix A, we begin in the same way as when finding the row rank. First we use elementary row operations to reduce A to an echelon matrix E. But the relationship between the column spaces of A and E is far more subtle than between their row spaces; the reason is that elementary row operations generally do not preserve column spaces. (See Problem 36.)

To analyze this situation, let us denote by c1, c2, …, cn the column vectors of the original matrix A, and by c̄1, c̄2, …, c̄n the column vectors of the echelon matrix E to which we have reduced A. Suppose that E has k nonzero rows. Then each column vector of E has the form c̄j = (*, …, *, 0, …, 0), with m − k final zeros. Hence the column space Col(E) is contained in R^k (considered as the subspace of R^m for which xk+1 = ⋯ = xm = 0).

We are particularly interested in those columns of E that contain the nonzero leading entries in the nonzero rows of E. These k columns are called the pivot columns of E. The echelon matrix E has the general form

E = \begin{bmatrix}
d_1 & * & * & * & * & \cdots & * & * \\
0 & d_2 & * & * & * & \cdots & * & * \\
0 & 0 & 0 & d_3 & * & \cdots & * & * \\
\vdots & & & & & \ddots & & \vdots \\
0 & 0 & 0 & 0 & \cdots & d_k & * & * \\
0 & 0 & 0 & 0 & \cdots & 0 & \cdots & 0 \\
\vdots & & & & & & & \vdots \\
0 & 0 & 0 & 0 & \cdots & 0 & \cdots & 0
\end{bmatrix},
(7)

where d1, d2, …, dk are the nonzero leading entries and each asterisk denotes an entry that may or may not be zero. Hence the k pivot column vectors of E look like

\begin{bmatrix} d_1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad
\begin{bmatrix} * \\ d_2 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad
\begin{bmatrix} * \\ * \\ d_3 \\ \vdots \\ 0 \end{bmatrix}, \quad \dots, \quad
\begin{bmatrix} * \\ * \\ \vdots \\ d_k \\ 0 \end{bmatrix}.
(8)

Because of the upper triangular pattern visible here, it should be apparent that the argument in Example 3 can be used to show that these k pivot vectors form a basis for Col(E). Thus the column rank of the echelon matrix E is equal to the number k of its nonzero rows, and hence is equal to its row rank.

Now comes the subtle part. We want to show that the column rank of the original matrix A also is k. Note first that the homogeneous linear systems

Ax = 0 \qquad \text{and} \qquad Ex = 0
(9)

have the same solution set, because the matrices A and E are equivalent. If x = (x1, x2, …, xn), then the left-hand sides in (9) are linear combinations of column vectors:

\begin{aligned}
Ax &= x_1 c_1 + x_2 c_2 + \cdots + x_n c_n, \\
Ex &= x_1 \bar{c}_1 + x_2 \bar{c}_2 + \cdots + x_n \bar{c}_n.
\end{aligned}
(10)

Because x satisfies either both or neither of the systems in (9), it follows that

x_1 c_1 + x_2 c_2 + \cdots + x_n c_n = 0

if and only if

x_1 \bar{c}_1 + x_2 \bar{c}_2 + \cdots + x_n \bar{c}_n = 0.

Hence every linear dependence among the column vectors of E is mirrored in a linear dependence, with the same coefficients, among the corresponding column vectors of A. In particular, because the k pivot column vectors of E are linearly independent, it follows that the corresponding k column vectors of A are linearly independent. And because the k pivot column vectors of E span Col(E), it follows that the corresponding k column vectors of A span Col(A). Thus these k vectors form a basis for the column space of A, and the column rank of A is k. Consequently we have established the following method (Algorithm 2) for finding a basis for the column space of a given matrix A: reduce A to an echelon matrix E; then the columns of A corresponding to the pivot columns of E form a basis for Col(A).

Remark

Since the pivot column vectors of the matrix A—that is, the column vectors corresponding to the pivot column vectors of the echelon matrix E—form a basis for the column space of A, it follows that every non-pivot column vector of A is a linear combination of its pivot column vectors.
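Algorithm 2 is also easy to express in code. A sketch assuming SymPy (the function name column_space_basis is ours): the pivot indices of the echelon form are computed, and the corresponding columns of A itself are returned.

from sympy import Matrix

def column_space_basis(A):
    # Columns of A in the pivot positions of its echelon form give a basis for Col(A).
    _, pivots = A.rref()
    return [A.col(j) for j in pivots]

# A small made-up illustration: every column is a multiple of the first.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])
print(column_space_basis(A))   # [Matrix([[1], [2]])] -- Col(A) is 1-dimensional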

Example 4

Consider again the 4 × 5 matrix

A = \begin{bmatrix} 1 & 2 & 1 & 3 & 2 \\ 3 & 4 & 9 & 0 & 7 \\ 2 & 3 & 5 & 1 & 8 \\ 2 & 2 & 8 & -3 & 5 \end{bmatrix}

of Example 2, which we reduced there to the echelon matrix

E = \begin{bmatrix} 1 & 2 & 1 & 3 & 2 \\ 0 & 1 & -3 & 5 & -4 \\ 0 & 0 & 0 & 1 & -7 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.

The pivot columns of E are its first, second, and fourth columns, so the 3-dimensional column space of A has a basis consisting of its first, second, and fourth column vectors c1 = (1, 3, 2, 2), c2 = (2, 4, 3, 2), and c4 = (3, 0, 1, -3).
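A computational version of this example, assuming SymPy. The last step also illustrates the Remark above: the non-pivot third column of A is expressed as a combination of the pivot columns c1, c2, c4 (the call to SymPy's gauss_jordan_solve and the coefficient vector y are our own illustration, not part of the text).

from sympy import Matrix

A = Matrix([[1, 2, 1,  3, 2],
            [3, 4, 9,  0, 7],
            [2, 3, 5,  1, 8],
            [2, 2, 8, -3, 5]])
_, pivots = A.rref()
print(pivots)                                    # (0, 1, 3)
B = Matrix.hstack(A.col(0), A.col(1), A.col(3))  # the pivot columns c1, c2, c4
y, _ = B.gauss_jordan_solve(A.col(2))            # solve B*y = c3
print(y.T)                                       # [7, -3, 0]: c3 = 7*c1 - 3*c2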

The fact that Algorithm 2 provides a basis for Col(A) consisting of column vectors of A itself (rather than of the echelon matrix E) enables us to apply it to the problem of extracting a maximal linearly independent subset from a given set of vectors in R^n.

Example 5

Find a subset of the vectors v1 = (1, 1, 2, 2), v2 = (3, 4, -1, 2), v3 = (0, -1, 7, 4), and v4 = (5, 7, -4, 2) that forms a basis for the subspace W of R^4 spanned by these four vectors.

Solution

It is not clear at the outset whether W is 2-dimensional, 3-dimensional, or 4-dimensional. To apply Algorithm 2, we arrange the given vectors as the column vectors of the matrix

A = \begin{bmatrix} 1 & 3 & 0 & 5 \\ 1 & 4 & -1 & 7 \\ 2 & -1 & 7 & -4 \\ 2 & 2 & 4 & 2 \end{bmatrix},

which reduces readily to the echelon matrix

E = \begin{bmatrix} 1 & 3 & 0 & 5 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.

The pivot columns of E are its first two columns, so the first two column vectors v1 = (1, 1, 2, 2) and v2 = (3, 4, -1, 2) of A form a basis for the column space W. In particular, we see that W is 2-dimensional. (You can confirm that v3 = 3v1 - v2 and v4 = -v1 + 2v2.)
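The same extraction can be carried out for any list of vectors. A generic sketch, assuming SymPy; the function name independent_subset is ours, and the vectors in the demonstration are made up rather than taken from Example 5.

from sympy import Matrix

def independent_subset(vectors):
    # Indices of a maximal linearly independent subset: the pivot columns
    # of the matrix whose columns are the given vectors.
    M = Matrix.hstack(*[Matrix(v) for v in vectors])
    _, pivots = M.rref()
    return list(pivots)

vs = [[1, 0, 1], [2, 0, 2], [0, 1, 1], [1, 1, 2]]
print(independent_subset(vs))   # [0, 2]: the first and third vectors span the same subspace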

Rank and Dimension

We have seen that the row rank and the column rank of an echelon matrix E both equal the number of rows of E that are not all zero. But if A is any matrix that reduces by row operations to the echelon matrix E, then, according to Algorithm 1, the row rank of A is equal to the row rank of E, while Algorithm 2 implies that the column rank of A is equal to the column rank of E. We therefore have the following fundamental result (Theorem 3): the row rank and the column rank of any matrix are equal.

For instance, if A is a 5 × 7 matrix and Row(A) is a 3-dimensional subspace of R^7, then Theorem 3 tells us that Col(A) must be a 3-dimensional subspace of R^5.

To see what an extraordinary theorem this is, think of a random 11 × 17 matrix A, possibly printed by a computer that uses a random number generator to produce independently its 187 entries. If it turns out that precisely 9 (no more) of the 11 row vectors of A are linearly independent, then Theorem 3 implies that precisely 9 (no more) of the 17 column vectors of A are linearly independent.
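This can be simulated directly. A sketch assuming NumPy: a random 11 × 17 matrix of rank exactly 9 is manufactured as a product of an 11 × 9 and a 9 × 17 random matrix, and the row rank and column rank are compared.

import numpy as np

rng = np.random.default_rng()
B = rng.standard_normal((11, 9))
C = rng.standard_normal((9, 17))
A = B @ C                              # an 11 x 17 matrix whose rank is (almost surely) 9
print(np.linalg.matrix_rank(A))        # 9: nine independent rows
print(np.linalg.matrix_rank(A.T))      # 9: and exactly nine independent columns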

The common value of the row rank and the column rank of the matrix A is simply called the rank of A and is denoted by rank(A). The rank of A provides a meaning for the number of irredundant equations in the homogeneous linear system

Ax = 0.
(11)

Indeed, the individual scalar equations in (11) correspond to the column vectors of the transpose matrix A^T, whose rank r equals that of A (Problem 25). Then the r column vectors of A^T that form a basis for Col(A^T) correspond to the r equations in (11) that are irredundant; the remaining equations are linear combinations of these irredundant ones and hence are redundant.
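In computational terms, the irredundant equations can be located by finding the pivot columns of A^T. A sketch assuming SymPy, applied to the three-equation system from the beginning of this section:

from sympy import Matrix

A = Matrix([[1, -2, 2],
            [1,  4, 3],
            [2,  2, 5]])
_, pivots = A.T.rref()   # pivot columns of A^T correspond to irredundant equations
print(pivots)            # (0, 1): the first two equations are irredundant, the third is not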

The solution space of the homogeneous system Ax = 0 is sometimes called the null space of A, denoted by Null(A). Note that if A is an m × n matrix, then Null(A) and Row(A) are subspaces of R^n, whereas Col(A) is a subspace of R^m. If r = rank(A), then the r column vectors of A that form a basis for Col(A) correspond to the r leading variables in a solution of Ax = 0 by Gaussian elimination. Moreover, we know from the algorithm in Section 4.4 that the dimension of the solution space Null(A) is equal to the number n − r of free variables. We therefore obtain the important identity

rank(A) + dim Null(A) = n
(12)

for any m × n matrix A. For instance, if A is the matrix of Example 4 with rank 3 and n = 5, then the dimension of the null space of A is 5 − 3 = 2.
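Equation (12) is easy to confirm for the matrix of Example 4, again assuming SymPy:

from sympy import Matrix

A = Matrix([[1, 2, 1,  3, 2],
            [3, 4, 9,  0, 7],
            [2, 3, 5,  1, 8],
            [2, 2, 8, -3, 5]])
print(A.rank())                        # 3
print(len(A.nullspace()))              # 2 basis vectors for Null(A)
print(A.rank() + len(A.nullspace()))   # 5 = n, the number of columns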

For another typical application of Equation (12), consider a homogeneous system of five linear equations in seven unknowns. If the 5 × 7 coefficient matrix of the system has rank 3, so only 3 of the 5 equations are irredundant, then (because n = 7) the system has 7 − 3 = 4 linearly independent solutions.

If A is an m × n matrix with rank m, then Equation (12) implies that the system Ax = 0 has n − m linearly independent solutions. Thus rank(A) = m is the condition under which the "conventional wisdom" about the relation between the numbers of equations, unknowns, and solutions is valid.

Nonhomogeneous Linear Systems

It follows immediately from Equation (10) and the definition of Col(A) that the nonhomogeneous linear system Ax = b is consistent if and only if the vector b is in the column space of A. In the problems below we outline some applications of the results in this section to nonhomogeneous systems.
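The consistency criterion can be phrased as a rank test: b lies in Col(A) exactly when appending b as an extra column does not raise the rank. A sketch assuming SymPy, with a made-up matrix:

from sympy import Matrix

def is_consistent(A, b):
    # Ax = b has a solution if and only if b is in Col(A),
    # i.e., rank([A | b]) = rank(A).
    return A.rank() == Matrix.hstack(A, b).rank()

A = Matrix([[1, 2],
            [2, 4],
            [3, 6]])                          # rank 1: Col(A) is the line spanned by (1, 2, 3)
print(is_consistent(A, Matrix([1, 2, 3])))    # True
print(is_consistent(A, Matrix([1, 0, 0])))    # False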

If we can find a single particular solution x0 of the nonhomogeneous system

Ax = b,
(13)

then the determination of its solution set reduces to solving the associated homogeneous system

Ax = 0.
(14)

For then Problem 29 in Section 4.2 implies that the solution set of (13) is the set of all vectors x of the form

x = x0 + xh,

where xh is a solution of (14). Hence if x1, x2, …, xr is a basis for the solution space of (14), then the solution set of the nonhomogeneous system is the set of all vectors of the form

x = x0 + c1x1 + c2x2 + ⋯ + crxr.

We may describe this solution set as the translate by the vector x0 of the r-dimensional solution space of the corresponding homogeneous system.
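The structure "particular solution plus null-space translate" can be seen directly in code. A sketch assuming SymPy, with a made-up rank-1 system so that the homogeneous solution space is 2-dimensional:

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])
b = Matrix([6, 12])
sol, params = A.gauss_jordan_solve(b)     # general solution with free parameters
x0 = sol.subs({p: 0 for p in params})     # one particular solution x0
N = A.nullspace()                         # basis x1, ..., xr of the homogeneous solutions
print(x0.T)                               # Matrix([[6, 0, 0]])
print(len(N))                             # 2: the solution set is x0 plus a 2-dimensional subspace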

4.5 Problems

In Problems 1–12, find both a basis for the row space and a basis for the column space of the given matrix A.

  1. \begin{bmatrix} 1 & 2 & 3 \\ 1 & 5 & 9 \\ 2 & 5 & 2 \end{bmatrix}

  2. \begin{bmatrix} 5 & 2 & 4 \\ 2 & 1 & 1 \\ 4 & 1 & 5 \end{bmatrix}

  3. \begin{bmatrix} 1 & 4 & 3 & 7 \\ 2 & 1 & 1 & 7 \\ 1 & 2 & 3 & 11 \end{bmatrix}

  4. \begin{bmatrix} 1 & 3 & 9 & 5 \\ 2 & 1 & 4 & 11 \\ 1 & 3 & 3 & 13 \end{bmatrix}

  5. \begin{bmatrix} 1 & 1 & 1 & 1 \\ 3 & 1 & 3 & 4 \\ 2 & 5 & 11 & 12 \end{bmatrix}

  6. \begin{bmatrix} 1 & 4 & 9 & 2 \\ 2 & 2 & 6 & 3 \\ 2 & 7 & 16 & 3 \end{bmatrix}

  7. \begin{bmatrix} 1 & 1 & 1 & 7 \\ 1 & 4 & 5 & 16 \\ 1 & 3 & 3 & 13 \\ 2 & 5 & 4 & 23 \end{bmatrix}

  8. \begin{bmatrix} 1 & 2 & 3 & 5 \\ 1 & 4 & 9 & 2 \\ 1 & 3 & 7 & 1 \\ 2 & 2 & 6 & 3 \end{bmatrix}

  9. \begin{bmatrix} 1 & 3 & 3 & 9 \\ 2 & 7 & 4 & 8 \\ 2 & 7 & 5 & 12 \\ 2 & 8 & 3 & 2 \end{bmatrix}

  10. \begin{bmatrix} 1 & 2 & 3 & 1 & 3 \\ 1 & 3 & 4 & 3 & 6 \\ 2 & 2 & 4 & 3 & 5 \\ 2 & 1 & 3 & 2 & 3 \end{bmatrix}

  11. \begin{bmatrix} 1 & 1 & 3 & 3 & 1 \\ 2 & 3 & 7 & 8 & 2 \\ 2 & 3 & 7 & 8 & 3 \\ 3 & 1 & 7 & 5 & 4 \end{bmatrix}

  12. \begin{bmatrix} 1 & 1 & 3 & 3 & 0 \\ 1 & 0 & 2 & 1 & 1 \\ 2 & 3 & 7 & 8 & 1 \\ 2 & 4 & 0 & 6 & 7 \end{bmatrix}

In Problems 13–16, a set S of vectors in R^4 is given. Find a subset of S that forms a basis for the subspace of R^4 spanned by S.

  13. v1 = (1, 3, 2, 4), v2 = (2, 1, 3, 2), v3 = (5, 1, 4, 8)

     

  14. v1 = (1, 1, 2, 3), v2 = (2, 3, 4, 1), v3 = (1, 1, 2, 1), v4 = (4, 1, 8, 7)

     

  15. v1 = (3, 2, 2, 2), v2 = (2, 1, 2, 1), v3 = (4, 3, 2, 3), v4 = (1, 2, 3, 4)

     

  16. v1 = (5, 4, 2, 2), v2 = (3, 1, 2, 3), v3 = (7, 7, 2, 1), v4 = (1, 1, 2, 4), v5 = (5, 4, 6, 7)

Let S = {v1, v2, …, vk} be a basis for the subspace W of R^n. Then a basis T for R^n that contains S can be found by applying the method of Example 5 to the vectors
v1, v2, …, vk, e1, e2, …, en.

Do this in Problems 17–20.
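A computational sketch of this basis-extension method, assuming SymPy; the function name extend_to_basis is ours, and the demonstration vectors are not taken from the problems.

from sympy import Matrix, eye

def extend_to_basis(vectors, n):
    # Append e1, ..., en after the given vectors and keep the pivot columns.
    M = Matrix.hstack(*[Matrix(v) for v in vectors], eye(n))
    _, pivots = M.rref()
    return [M.col(j) for j in pivots]

T = extend_to_basis([[1, 0, 1], [0, 1, 1]], 3)
print(len(T))   # 3: the two given vectors together with one standard basis vector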

  17. Find a basis T for R^3 that contains the vectors v1 = (1, 2, 2) and v2 = (2, 3, 3).

  18. Find a basis T for R^3 that contains the vectors v1 = (3, 2, 1) and v2 = (2, 2, 1).

  19. Find a basis T for R^4 that contains the vectors v1 = (1, 1, 1, 1) and v2 = (2, 3, 3, 3).

  20. Find a basis T for R^4 that contains the vectors v1 = (3, 2, 3, 3) and v2 = (5, 4, 5, 5).

Given a homogeneous system Ax = 0 of (scalar) linear equations, we say that a subset of these equations is irredundant provided that the corresponding column vectors of the transpose A^T are linearly independent. In Problems 21–24, extract from each given system a maximal subset of irredundant equations.

  21. x1 - 3x2 + 2x3 = 0
      2x1 + 3x2 + 2x3 = 0
      4x1 - 3x2 + 6x3 = 0

     

  22. 3x1 + 2x2 + 2x3 + 2x4 = 0
      2x1 + 3x2 + 3x3 + 3x4 = 0
      8x1 + 7x2 + 7x3 + 7x4 = 0

     

  23. x1 + 2x2 - x3 + 2x4 = 0
      3x1 - x2 + 3x3 + x4 = 0
      5x1 + 3x2 + x3 + 5x4 = 0
      2x1 + 5x3 + 4x4 = 0

     

  24. 3x1 + 2x2 + 2x3 = 0
      2x1 + 3x2 + 3x3 = 0
      7x1 + 8x2 + 8x3 = 0
      8x1 + 7x2 + 7x3 = 0
      5x1 + 6x2 + 5x3 = 0

     

  25. Explain why the rank of a matrix A is equal to the rank of its transpose A^T.

  26. Explain why the n × n matrix A is invertible if and only if its rank is n.

Problems 27 through 32 explore the properties of nonhomogeneous systems.

  27. Let A be a 3 × 5 matrix whose three row vectors are linearly independent. Prove that, for each b in R^3, the nonhomogeneous system Ax = b has a solution.

  28. Let A be a 5 × 3 matrix that has three linearly independent row vectors. Suppose that b is a vector in R^5 such that the nonhomogeneous system Ax = b has a solution. Prove that this solution is unique.

  29. Let A be an m × n matrix with m < n. Given b in R^m, note that any solution of Ax = b expresses b as a linear combination of the column vectors of A. Then prove that no such solution is unique.

  30. Given the m × n matrix A with m > n, show that there exists a vector b in R^m such that the system Ax = b has no solution.

  31. Existence of Solutions Let A be an m × n matrix. Prove that the system Ax = b is consistent for every b in R^m if and only if the rank of A is equal to m.

  32. Uniqueness of Solutions Let A be an m × n matrix and suppose that the system Ax = b is consistent. Prove that its solution is unique if and only if the rank of A is equal to n.

  33. Prove that the pivot column vectors in (8) are linearly independent.

  34. Let A be a matrix with rank r, and suppose that A can be reduced to echelon form without row interchanges. Show that the first r row vectors of A are linearly independent.

  35. Deduce from Theorem 3 in Section 4.3 that the rank of the matrix A is the largest integer r such that A has a nonsingular r × r submatrix.

  36. Let A be the 3 × 2 matrix whose columns are the unit basis vectors i = (1, 0, 0) and j = (0, 1, 0). If B is the row-equivalent matrix obtained by adding the first and second rows of A to its third row, show that A and B do not have the same column space.
