4.3 Linear Combinations and Independence of Vectors

In Example 5 of Section 4.2 we solved the homogeneous linear system

\[
\begin{aligned}
x_1 + 3x_2 - 15x_3 + 7x_4 &= 0\\
x_1 + 4x_2 - 19x_3 + 10x_4 &= 0\\
2x_1 + 5x_2 - 26x_3 + 11x_4 &= 0.
\end{aligned}
\]
(1)

We found that its solution space W consists of all those vectors x in R4 that have the form

\[
\mathbf{x} = s(3, 4, 1, 0) + t(2, -3, 0, 1).
\]
(2)

We therefore can visualize W as the plane in R4 determined by the vectors v1 = (3, 4, 1, 0) and v2 = (2, -3, 0, 1). The fact that every solution vector is a combination [as in (2)] of the particular solution vectors v1 and v2 gives us a tangible understanding of the solution space W of the system in (1).

More generally, we know from Theorem 2 in Section 4.2 that the solution set V of any m×n homogeneous linear system Ax = 0 is a subspace of Rn. In order to understand such a vector space V better, we would like to find a minimal set of vectors v1, v2, …, vk in V such that every vector in V is a sum of scalar multiples of these particular vectors.

The vector w is called a linear combination of the vectors v1, v2, …, vk provided that there exist scalars c1, c2, …, ck such that

\[
\mathbf{w} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k.
\]
(3)

Given a vector w in Rn, the problem of determining whether or not w is a linear combination of the vectors v1, v2, …, vk amounts to solving a linear system to see whether we can find scalars c1, c2, …, ck so that (3) holds.

Example 1

To determine whether the vector w = (2, -6, 3) in R3 is a linear combination of the vectors v1 = (1, -2, -1) and v2 = (3, -5, 4), we write the equation c1v1 + c2v2 = w in matrix form:

\[
c_1\begin{bmatrix} 1 \\ -2 \\ -1 \end{bmatrix}
+ c_2\begin{bmatrix} 3 \\ -5 \\ 4 \end{bmatrix}
= \begin{bmatrix} 2 \\ -6 \\ 3 \end{bmatrix}
\]

—that is,

\[
\begin{aligned}
c_1 + 3c_2 &= 2\\
-2c_1 - 5c_2 &= -6\\
-c_1 + 4c_2 &= 3.
\end{aligned}
\]

The augmented coefficient matrix

\[
\begin{bmatrix}
1 & 3 & 2 \\
-2 & -5 & -6 \\
-1 & 4 & 3
\end{bmatrix}
\]

can be reduced by elementary row operations to echelon form:

\[
\begin{bmatrix}
1 & 3 & 2 \\
0 & 1 & -2 \\
0 & 0 & 19
\end{bmatrix}.
\]

We see now, from the third row, that our system is inconsistent, so the desired scalars c1 and c2 do not exist. Thus w is not a linear combination of v1 and v2.
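The row reduction in Example 1 is easy to check by machine. The sketch below is plain Python with exact Fraction arithmetic (the helper name rref is ours, not from the text); it reduces the augmented matrix [v1 v2 | w] of Example 1 and detects the inconsistent row:

```python
from fractions import Fraction

def rref(rows):
    """Reduce a matrix, given as a list of rows, to reduced row-echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for col in range(len(m[0])):
        # Find a row at or below position `lead` with a nonzero entry in this column.
        pivot = next((r for r in range(lead, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[lead], m[pivot] = m[pivot], m[lead]
        piv = m[lead][col]
        m[lead] = [x / piv for x in m[lead]]
        # Clear the pivot column in every other row.
        for r in range(len(m)):
            if r != lead and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[lead])]
        lead += 1
    return m

# Augmented matrix of the system in Example 1.
aug = rref([[1, 3, 2], [-2, -5, -6], [-1, 4, 3]])
# A row whose coefficient entries all vanish but whose last (augmented)
# entry does not corresponds to an impossible equation such as 0 = 19.
inconsistent = any(all(x == 0 for x in row[:-1]) and row[-1] != 0 for row in aug)
```

The same routine answers any "is w a linear combination of v1, …, vk?" question: the system is solvable exactly when no such inconsistent row appears.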

Example 2

To express the vector w = (-7, 7, 11) as a linear combination of the vectors v1 = (1, 2, 1), v2 = (-4, -1, 2), and v3 = (-3, 1, 3), we write the equation c1v1 + c2v2 + c3v3 = w in the form

\[
c_1\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}
+ c_2\begin{bmatrix} -4 \\ -1 \\ 2 \end{bmatrix}
+ c_3\begin{bmatrix} -3 \\ 1 \\ 3 \end{bmatrix}
= \begin{bmatrix} -7 \\ 7 \\ 11 \end{bmatrix}
\]

—that is,

\[
\begin{aligned}
c_1 - 4c_2 - 3c_3 &= -7\\
2c_1 - c_2 + c_3 &= 7\\
c_1 + 2c_2 + 3c_3 &= 11.
\end{aligned}
\]

The reduced echelon form of the augmented coefficient matrix of this system is

\[
\begin{bmatrix}
1 & 0 & 1 & 5 \\
0 & 1 & 1 & 3 \\
0 & 0 & 0 & 0
\end{bmatrix}.
\]

Thus c3 is a free variable. With c3 = t, back substitution yields c1 = 5 - t and c2 = 3 - t. For instance, t = 1 gives c1 = 4, c2 = 2, and c3 = 1, so

w=4v1+2v2+v3.

But t = -2 yields c1 = 7, c2 = 5, and c3 = -2, so w can also be expressed as

w=7v1+5v2-2v3.

We have found not only that w can be expressed as a linear combination of the vectors v1, v2, v3 but also that this can be done in many different ways (one for each choice of the parameter t).
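Both representations found in Example 2 can be verified directly. The sketch below is plain Python (the helper name combo is ours); it takes v1 = (1, 2, 1), v2 = (-4, -1, 2), v3 = (-3, 1, 3), and w = (-7, 7, 11) as in Example 2 and computes each linear combination componentwise:

```python
def combo(coeffs, vectors):
    """Componentwise linear combination c1*v1 + c2*v2 + ... of equal-length vectors."""
    return [sum(c * x for c, x in zip(coeffs, comp)) for comp in zip(*vectors)]

v1, v2, v3 = (1, 2, 1), (-4, -1, 2), (-3, 1, 3)
w = [-7, 7, 11]

# t = 1 gives (c1, c2, c3) = (4, 2, 1); t = -2 gives (7, 5, -2).
first = combo((4, 2, 1), (v1, v2, v3))
second = combo((7, 5, -2), (v1, v2, v3))
```

Both results equal w, confirming that the representation is far from unique here.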

We began this section with the observation that every solution vector of the linear system in (1) is a linear combination of the vectors v1 and v2 that appear in the right-hand side of Eq. (2). A brief way of saying this is that the vectors v1 and v2 span the solution space. More generally, suppose that v1, v2, …, vk are vectors in a vector space V. Then we say that the vectors v1, v2, …, vk span the vector space V provided that every vector in V is a linear combination of these k vectors. We may also say that the set S = {v1, v2, …, vk} of vectors is a spanning set for V.

Example 3

The familiar unit vectors i=(1,0,0), j=(0,1,0), and k=(0,0,1) span R3, because every vector x=(x1,x2,x3) in R3 can be expressed as the linear combination

x=x1i+x2j+x3k

of these three vectors i, j, and k.

If the vectors v1, v2, …, vk in the vector space V do not span V, we can ask about the subset of V consisting of all those vectors that are linear combinations of v1, v2, …, vk. The following theorem implies that this subset is always a subspace of V.

Theorem 1  The Span of a Set of Vectors

Let v1, v2, …, vk be vectors in the vector space V. Then the set W of all linear combinations of v1, v2, …, vk is a subspace of V.

We say that the subspace W of Theorem 1 is the space spanned by the vectors v1, v2, …, vk (or is the span of the set S = {v1, v2, …, vk} of vectors). We sometimes write

W=span(S)=span{v1, v2, …, vk}.

Thus Example 3 implies that R3 = span{i, j, k}. The question as to whether a given vector w in Rn lies in the subspace span{v1, v2, …, vk} reduces to solving a linear system, as illustrated by Examples 1 and 2.

It is easy to verify that the space W = span{v1, v2, …, vk} of Theorem 1 is the smallest subspace of V that contains all the vectors v1, v2, …, vk—meaning that every other subspace of V that contains these k vectors must also contain W (Problem 30).

Linear Independence

Henceforth, when we solve a homogeneous system of linear equations, we generally will seek a set v1, v2, …, vk of solution vectors that span the solution space W of the system. Perhaps the most concrete way to describe a subspace W of a vector space V is to give explicitly a set v1, v2, …, vk of vectors that span W. And this type of representation is most useful and desirable (as well as most aesthetically pleasing) when each vector w in W is expressible in a unique way as a linear combination of v1, v2, …, vk. (For instance, each vector in R3 is a unique linear combination of the vectors i, j, and k of Example 3.) But Example 2 demonstrates that a vector w may well be expressed in many different ways as a linear combination of given vectors v1, v2, …, vk.

Thus not all spanning sets enjoy the uniqueness property that we desire. Two questions arise:

  • Given a subspace W of a vector space V, does there necessarily exist a spanning set with the uniqueness property that we desire?

  • If so, how do we find such a spanning set for W?

The following definition provides the key to both answers.

Definition  Linear Independence

The vectors v1, v2, …, vk in a vector space V are said to be linearly independent provided that the equation

\[
c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}
\]
(4)

has only the trivial solution c1 = c2 = … = ck = 0.

Remark

We can immediately verify that any vector w in the subspace W spanned by the linearly independent vectors v1, v2, …, vk is uniquely expressible as a linear combination of these vectors. For

\[
\mathbf{w} = \sum_{i=1}^{k} a_i\mathbf{v}_i = \sum_{i=1}^{k} b_i\mathbf{v}_i
\quad\text{implies that}\quad
\sum_{i=1}^{k} (a_i - b_i)\mathbf{v}_i = \mathbf{0}.
\]

Hence, with ci = ai - bi for each i = 1, …, k, the linear independence of v1, v2, …, vk implies that c1 = c2 = … = ck = 0. Thus

\[
\sum_{i=1}^{k} a_i\mathbf{v}_i = \sum_{i=1}^{k} b_i\mathbf{v}_i
\quad\text{implies that}\quad
a_i = b_i \ \text{for } i = 1, \ldots, k,
\]

so we see that w can be expressed in only one way as a combination of the linearly independent vectors v1, v2, …, vk.

Example 4

The standard unit vectors

\[
\mathbf{e}_1 = (1, 0, 0, \ldots, 0),\quad
\mathbf{e}_2 = (0, 1, 0, \ldots, 0),\quad \ldots,\quad
\mathbf{e}_n = (0, 0, 0, \ldots, 1)
\]

in Rn are linearly independent. The reason is that the equation

\[
c_1\mathbf{e}_1 + c_2\mathbf{e}_2 + \cdots + c_n\mathbf{e}_n = \mathbf{0}
\]

evidently reduces to

\[
(c_1, c_2, \ldots, c_n) = (0, 0, \ldots, 0)
\]

and thus has only the trivial solution c1 = c2 = … = cn = 0.
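
A quick computational check of Example 4 (a sketch in plain Python; the names n, e, and combo are ours): a linear combination of the standard unit vectors simply reproduces its own coefficients, so it vanishes only when every coefficient is zero.

```python
n = 4  # illustrate in R^4; the same pattern works for any n

# Standard unit vectors e1, ..., en: row i has a 1 in position i and 0 elsewhere.
e = [[1 if j == i else 0 for j in range(n)] for i in range(n)]

def combo(coeffs):
    """c1*e1 + ... + cn*en, computed componentwise."""
    return [sum(c * v[j] for c, v in zip(coeffs, e)) for j in range(n)]

# The combination reproduces the coefficient tuple itself, so it equals
# the zero vector only for the trivial choice of coefficients.
result = combo((2, -1, 0, 5))
```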

Example 5

To determine whether the vectors v1=(1,2,2,1), v2=(2,3,4,1), and v3=(3,8,7,5) in R4 are linearly independent, we write the equation c1v1+c2v2+c3v3=0 as the linear system

\[
\begin{aligned}
c_1 + 2c_2 + 3c_3 &= 0\\
2c_1 + 3c_2 + 8c_3 &= 0\\
2c_1 + 4c_2 + 7c_3 &= 0\\
c_1 + c_2 + 5c_3 &= 0
\end{aligned}
\]

and then solve for c1, c2, and c3. The augmented coefficient matrix of this system reduces to the echelon form

\[
\begin{bmatrix}
1 & 2 & 3 & 0 \\
0 & -1 & 2 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix},
\]

so we see that the only solution is c1=c2=c3=0. Thus the vectors v1, v2, and v3 are linearly independent.
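
Independence can also be decided by counting pivots: the columns of a matrix are linearly independent exactly when its rank equals the number of columns. A sketch in plain Python with exact Fraction arithmetic (the helper name rank is ours), applied to the vectors of Example 5:

```python
from fractions import Fraction

def rank(rows):
    """Number of pivots produced by Gaussian elimination (exact arithmetic)."""
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for col in range(len(m[0])):
        # Find a pivot row for this column at or below position `lead`.
        pivot = next((r for r in range(lead, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[lead], m[pivot] = m[pivot], m[lead]
        # Eliminate the entries below the pivot.
        for r in range(lead + 1, len(m)):
            f = m[r][col] / m[lead][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[lead])]
        lead += 1
    return lead

# Rows hold the components; the columns are v1, v2, v3 of Example 5.
A = [[1, 2, 3], [2, 3, 8], [2, 4, 7], [1, 1, 5]]
independent = rank(A) == 3  # rank equals the number of vectors
```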

Observe that linear independence of the vectors v1, v2, …, vk actually is a property of the set S = {v1, v2, …, vk} whose elements are these vectors. Occasionally the phraseology "the set S = {v1, v2, …, vk} is linearly independent" is more convenient. For instance, any subset of a linearly independent set S = {v1, v2, …, vk} is a linearly independent set of vectors (Problem 29).

Now we show that the coefficients in a linear combination of the linearly independent vectors v1, v2, …, vk are unique. If both

\[
\mathbf{w} = a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k
\]
(5)

and

\[
\mathbf{w} = b_1\mathbf{v}_1 + b_2\mathbf{v}_2 + \cdots + b_k\mathbf{v}_k,
\]
(6)

then

\[
a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k
= b_1\mathbf{v}_1 + b_2\mathbf{v}_2 + \cdots + b_k\mathbf{v}_k,
\]

so it follows that

\[
(a_1 - b_1)\mathbf{v}_1 + (a_2 - b_2)\mathbf{v}_2 + \cdots + (a_k - b_k)\mathbf{v}_k = \mathbf{0}.
\]
(7)

Because the vectors v1, v2, …, vk are linearly independent, each of the coefficients in (7) must vanish. Therefore, a1 = b1, a2 = b2, …, ak = bk, so we have shown that the linear combinations in (5) and (6) actually are identical. Hence, if a vector w is in the set span{v1, v2, …, vk}, then it can be expressed in only one way as a linear combination of these linearly independent vectors.

A set of vectors is called linearly dependent provided it is not linearly independent. Hence the vectors v1, v2, …, vk are linearly dependent if and only if there exist scalars c1, c2, …, ck not all zero such that

\[
c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}.
\]
(8)

In short, a (finite) set of vectors is linearly dependent provided that some nontrivial linear combination of them equals the zero vector.

Example 6

Let v1 = (2, 1, 3), v2 = (5, -2, 4), v3 = (3, 8, -6), and v4 = (2, 7, -4). Then the equation c1v1 + c2v2 + c3v3 + c4v4 = 0 is equivalent to the linear system

\[
\begin{aligned}
2c_1 + 5c_2 + 3c_3 + 2c_4 &= 0\\
c_1 - 2c_2 + 8c_3 + 7c_4 &= 0\\
3c_1 + 4c_2 - 6c_3 - 4c_4 &= 0
\end{aligned}
\]

of three equations in four unknowns. Because this homogeneous system has more unknowns than equations, Theorem 3 in Section 3.3 implies that it has a nontrivial solution. Therefore we may conclude—without even solving explicitly for c1, c2, c3, and c4—that the vectors v1, v2, v3, and v4 are linearly dependent. (It happens that

2v1-v2+3v3-4v4=0,

as you can verify easily.)
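
The dependence relation claimed at the end of Example 6 is a one-line check in plain Python, using the vectors of Example 6:

```python
v1, v2, v3, v4 = (2, 1, 3), (5, -2, 4), (3, 8, -6), (2, 7, -4)

# Verify the nontrivial relation 2*v1 - v2 + 3*v3 - 4*v4 = 0 componentwise.
relation = [2 * a - b + 3 * c - 4 * d for a, b, c, d in zip(v1, v2, v3, v4)]
```

The result is the zero vector, so the four vectors are indeed linearly dependent.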

The argument in Example 6 may be generalized in an obvious way to prove that any set of more than n vectors in Rn is linearly dependent. For if k>n, then Eq. (8) is equivalent to a homogeneous linear system with more unknowns (k) than equations (n), so Theorem 3 in Section 3.3 yields a nontrivial solution.

We now look at the way in which the elements of a linearly dependent set of vectors v1, v2, …, vk "depend" on one another. We know that there exist scalars c1, c2, …, ck not all zero such that

\[
c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}.
\]
(9)

Suppose that the pth coefficient is nonzero: cp ≠ 0. Then we can solve Eq. (9) for cpvp and next divide by cp to get

\[
\mathbf{v}_p = a_1\mathbf{v}_1 + \cdots + a_{p-1}\mathbf{v}_{p-1}
+ a_{p+1}\mathbf{v}_{p+1} + \cdots + a_k\mathbf{v}_k,
\]
(10)

where ai = -ci/cp for i ≠ p. Thus at least one of the linearly dependent vectors is a linear combination of the other k - 1. Conversely, suppose we are given a set of vectors v1, v2, …, vk with one of them dependent on the others as in Eq. (10). Then we can transpose all the terms to the left-hand side to get an equation of the form in (9) with cp = -1 ≠ 0. This shows that the vectors are linearly dependent. Therefore, we have proved that the vectors v1, v2, …, vk are linearly dependent if and only if at least one of them is a linear combination of the others.

For instance (as we saw in Section 4.1), two vectors are linearly dependent if and only if one of them is a scalar multiple of the other, in which case the two vectors are collinear. Three vectors are linearly dependent if and only if one of them is a linear combination of the other two, in which case the three vectors are coplanar.

In Theorem 4 of Section 4.1 we saw that the determinant provides a criterion for deciding whether three vectors in R3 are linearly independent: The vectors v1, v2, v3 in R3 are linearly independent if and only if the determinant of the 3×3 matrix

\[
\mathbf{A} = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \mathbf{v}_3 \end{bmatrix}
\]

is nonzero. The proof given there in the three-dimensional case generalizes readily to the n-dimensional case. Given n vectors v1, v2, …, vn in Rn, we consider the n×n matrix

\[
\mathbf{A} = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \end{bmatrix}
\]

having these vectors as its column vectors. Then, by Theorem 2 in Section 3.6, det A ≠ 0 if and only if A is invertible, in which case the system Ac = 0 has only the trivial solution c1 = c2 = … = cn = 0, so the vectors v1, v2, …, vn must be linearly independent. Conversely, if det A = 0, then A is not invertible, so Ac = 0 has a nontrivial solution and the vectors are linearly dependent. This establishes the following theorem.

Theorem 2  Independence of n Vectors in Rn

The n vectors v1, v2, …, vn in Rn are linearly independent if and only if the n×n matrix A having them as its column vectors has nonzero determinant.
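
The determinant criterion is straightforward to apply by machine. The sketch below is plain Python (a textbook cofactor expansion, adequate for small matrices; the helper name det is ours), applied to the first three vectors of Example 6:

```python
def det(m):
    """Determinant by cofactor expansion along the first row (small matrices only)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Columns are v1, v2, v3 of Example 6. A nonzero determinant shows that these
# three vectors by themselves are linearly independent; the dependence found
# in Example 6 requires v4 as well.
A = [[2, 5, 3], [1, -2, 8], [3, 4, -6]]
```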

We saw earlier that a set of more than n vectors in Rn is always linearly dependent. The following theorem shows us how the determinant provides a criterion in the case of fewer than n vectors in Rn.

Theorem 3  Independence of Fewer Than n Vectors

Let v1, v2, …, vk be k vectors in Rn, with k < n, and let A be the n×k matrix having them as its column vectors. Then the vectors v1, v2, …, vk are linearly independent if and only if some k×k submatrix of A has nonzero determinant.

Rather than including a complete proof, we will simply illustrate the “if” part of Theorem 3 in the case n=5, k=3. Let v1=(a1,a2,a3,a4,a5), v2=(b1,b2,b3,b4,b5), and v3=(c1,c2,c3,c4,c5) be three vectors in R5 such that the 5×3 matrix

\[
\mathbf{A} = \begin{bmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3 \\
a_4 & b_4 & c_4 \\
a_5 & b_5 & c_5
\end{bmatrix}
\]

has a 3×3 submatrix with nonzero determinant. Suppose, for instance, that

\[
\begin{vmatrix}
a_1 & b_1 & c_1 \\
a_3 & b_3 & c_3 \\
a_5 & b_5 & c_5
\end{vmatrix} \neq 0.
\]

Then Theorem 2 implies that the three vectors u1=(a1,a3,a5), u2=(b1,b3,b5), and u3=(c1,c3,c5) in R3 are linearly independent. Now suppose that c1v1+c2v2+c3v3=0. Then by deleting the second and fourth components of each vector in this equation, we find that c1u1+c2u2+c3u3=0. But the fact that u1, u2, u3 are linearly independent implies that c1=c2=c3=0, and it now follows that v1, v2, v3 are linearly independent.
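
The submatrix test described above is also mechanical: enumerate the choices of k rows and look for a nonzero k×k determinant. A sketch in plain Python (the helper names are ours), applied to the independent vectors of Example 5, where k = 3 < n = 4:

```python
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# 4 x 3 matrix whose columns are v1, v2, v3 of Example 5 (k = 3 < n = 4).
A = [[1, 2, 3], [2, 3, 8], [2, 4, 7], [1, 1, 5]]
k = 3

# The columns are independent iff some k x k submatrix, obtained by
# choosing k of the n rows, has nonzero determinant.
minors = [det([A[i] for i in rows]) for rows in combinations(range(len(A)), k)]
has_nonzero_minor = any(d != 0 for d in minors)
```

Here the submatrix formed from the first three rows already has determinant -1, so the vectors are linearly independent, in agreement with Example 5.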

4.3 Problems

In Problems 1–8, determine whether the given vectors v1, v2, …, vk are linearly independent or linearly dependent. Do this essentially by inspection—that is, without solving a linear system of equations.

  1. v1=(4,2,6,4), v2=(6,3,9,6)

     

  2. v1=(3,9,3,6), v2=(2,6,2,4)

     

  3. v1=(3,4), v2=(6,1), v3=(7,5)

     

  4. v1=(4,2,2), v2=(5,4,3), v3=(4,6,5), v4=(7,9,3)

     

  5. v1=(1,0,0), v2=(0,2,0), v3=(0,0,3)

     

  6. v1=(1,0,0), v2=(1,1,0), v3=(1,1,1)

     

  7. v1=(2,1,0,0), v2=(3,0,1,0), v3=(4,0,0,1)

     

  8. v1=(1,0,3,0), v2=(0,2,0,4), v3=(1,2,3,4)

In Problems 9–16, express the indicated vector w as a linear combination of the given vectors v1, v2, …, vk if this is possible. If not, show that it is impossible.

  9. w=(1,0,7); v1=(5,3,4), v2=(3,2,5)

     

  10. w=(3,1,2); v1=(3,1,2), v2=(6,2,3)

     

  11. w=(1,0,0,1); v1=(7,6,4,5), v2=(3,3,2,3)

     

  12. w=(4,4,3,3); v1=(7,3,1,9), v2=(2,2,1,3)

     

  13. w=(5,2,2); v1=(1,5,3), v2=(5,3,4)

     

  14. w=(2,3,2,3); v1=(1,0,0,3), v2=(0,1,2,0), v3=(0,1,1,1)

     

  15. w=(4,5,6); v1=(2,1,4), v2=(3,0,1), v3=(1,2,1)

     

  16. w=(7,7,9,11); v1=(2,0,3,1), v2=(4,1,3,2), v3=(1,3,1,3)

In Problems 17–22, three vectors v1, v2, and v3 are given. If they are linearly independent, show this; otherwise find a nontrivial linear combination of them that is equal to the zero vector.

  17. v1=(1,0,1), v2=(2,3,4), v3=(3,5,2)

     

  18. v1=(2,0,3), v2=(4,5,6), v3=(2,1,3)

     

  19. v1=(2,0,3,0), v2=(5,4,2,1), v3=(2,1,1,1)

     

  20. v1=(1,1,1,1), v2=(2,1,1,1), v3=(3,1,4,1)

     

  21. v1=(3,0,1,2), v2=(1,1,0,1), v3=(1,2,1,0)

     

  22. v1=(3,9,0,5), v2=(3,0,9,7), v3=(4,7,5,0)

In Problems 23–26, the vectors {vi} are known to be linearly independent. Apply the definition of linear independence to show that the vectors {ui} are also linearly independent.

  23. u1=v1+v2, u2=v1-v2

     

  24. u1=v1+v2, u2=2v1+3v2

     

  25. u1=v1, u2=v1+2v2, u3=v1+2v2+3v3

     

  26. u1=v2+v3, u2=v1+v3, u3=v1+v2

     

  27. Prove: If the (finite) set S of vectors contains the zero vector, then S is linearly dependent.

  28. Prove: If the set S of vectors is linearly dependent and the (finite) set T contains S, then T is also linearly dependent. You may assume that S = {v1, v2, …, vk} and that T = {v1, v2, …, vm} with m > k.

  29. Show that if the (finite) set S of vectors is linearly independent, then any subset T of S is also linearly independent.

  30. Suppose that the subspace U of the vector space V contains the vectors v1, v2, …, vk. Show that U contains the subspace spanned by these vectors.

  31. Let S and T be sets of vectors in a vector space such that S is a subset of span(T). Show that span(S) is also a subset of span(T).

  32. Let v1, v2, …, vk be linearly independent vectors in the set S of vectors. Prove: If no set of more than k vectors in S is linearly independent, then every vector in S is a linear combination of v1, v2, …, vk.

In Problems 33–35, let v1, v2, …, vk be vectors in Rn and let

\[
\mathbf{A} = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_k \end{bmatrix}
\]

be the n×k matrix with these vectors as its column vectors.

  33. Prove: If some k×k submatrix of A is the k×k identity matrix, then v1, v2, …, vk are linearly independent.

  34. Suppose that k = n, that the vectors v1, v2, …, vk are linearly independent, and that B is a nonsingular n×n matrix. Prove that the column vectors of the matrix AB are linearly independent.

  35. Suppose that k < n, that the vectors v1, v2, …, vk are linearly independent, and that B is a nonsingular k×k matrix. Use Theorem 3 to show that the column vectors of AB are linearly independent.
