In Example 5 of Section 4.2 we solved the homogeneous linear system
We found that its solution space W consists of all those vectors x in that have the form
We therefore can visualize W as the plane in determined by the vectors and . The fact that every solution vector is a combination [as in (2)] of the particular solution vectors and gives us a tangible understanding of the solution space W of the system in (1).
More generally, we know from Theorem 2 in Section 4.2 that the solution set V of any homogeneous linear system in n unknowns is a subspace of Rⁿ. In order to understand such a vector space V better, we would like to find a minimal set of vectors in V such that every vector in V is a sum of scalar multiples of these particular vectors.
The vector w is called a linear combination of the vectors v1, v2, ..., vk provided that there exist scalars c1, c2, ..., ck such that
w = c1v1 + c2v2 + ... + ckvk.   (3)
Given a vector w in Rⁿ, the problem of determining whether or not w is a linear combination of the vectors v1, v2, ..., vk amounts to solving a linear system to see whether we can find scalars c1, c2, ..., ck so that (3) holds.
To determine whether the vector in is a linear combination of the vectors and , we write the equation in matrix form:
—that is,
The augmented coefficient matrix
can be reduced by elementary row operations to echelon form:
We see now, from the third row, that our system is inconsistent, so the desired scalars and do not exist. Thus w is not a linear combination of and .
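The consistency test of Example 1 can be sketched numerically. In the following sketch, the vectors v1, v2, and w are hypothetical stand-ins (not Example 1's actual data), and NumPy is assumed: w lies in span{v1, v2} exactly when appending w to the coefficient matrix does not increase its rank.

```python
import numpy as np

# Hypothetical vectors standing in for Example 1's v1, v2, and w.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
w = np.array([1.0, 1.0, 4.0])

# The system [v1 v2] c = w is consistent exactly when the
# augmented matrix has the same rank as the coefficient matrix.
A = np.column_stack([v1, v2])
rank_A = np.linalg.matrix_rank(A)
rank_Aw = np.linalg.matrix_rank(np.column_stack([A, w]))

is_combination = (rank_A == rank_Aw)
print(is_combination)  # False: this particular w is not in span{v1, v2}
```

Here the augmented matrix has rank 3 while the coefficient matrix has rank 2, so the system is inconsistent, just as the third row of the echelon form signaled in Example 1.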
To express the vector as a linear combination of the vectors , and , we write the equation in the form
—that is,
The reduced echelon form of the augmented coefficient matrix of this system is
Thus is a free variable. With , back substitution yields and . For instance, gives , and , so
But yields , and , so w can also be expressed as
We have found not only that w can be expressed as a linear combination of the vectors but also that this can be done in many different ways (one for each choice of the parameter t).
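The one-parameter family of representations in Example 2 can be imitated with a small hypothetical system (the vectors below are illustrative, not Example 2's data): three vectors in R² force a free variable, and each choice of the parameter t yields a different expression of w.

```python
import numpy as np

# Hypothetical vectors in R^2 playing the role of Example 2's
# v1, v2, v3: three vectors, so the system has a free variable.
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
v3 = np.array([1.0, 1.0])
w = np.array([2.0, 3.0])

# Each choice of the free parameter t gives coefficients
# (c1, c2, c3) = (2 - t, 3 - t, t) with c1*v1 + c2*v2 + c3*v3 = w.
def combination(t):
    c1, c2, c3 = 2 - t, 3 - t, t
    return c1 * v1 + c2 * v2 + c3 * v3

assert np.allclose(combination(0.0), w)  # w = 2*v1 + 3*v2
assert np.allclose(combination(1.0), w)  # w = 1*v1 + 2*v2 + 1*v3
```

Every value of t produces a valid set of coefficients, one representation of w for each choice of the parameter.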
We began this section with the observation that every solution vector of the linear system in (1) is a linear combination of the two particular solution vectors that appear in the right-hand side of Eq. (2). A brief way of saying this is that those vectors span the solution space. More generally, suppose that v1, v2, ..., vk are vectors in a vector space V. Then we say that the vectors v1, v2, ..., vk span the vector space V provided that every vector in V is a linear combination of these k vectors. We may also say that the set S = {v1, v2, ..., vk} of vectors is a spanning set for V.
The familiar unit vectors i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1) span R³, because every vector v = (a, b, c) in R³ can be expressed as the linear combination
v = ai + bj + ck
of these three vectors i, j, and k.
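The decomposition v = ai + bj + ck can be checked directly; the sample vector below is arbitrary, chosen only for illustration:

```python
import numpy as np

# The standard unit vectors i, j, k of Example 3.
i = np.array([1.0, 0.0, 0.0])
j = np.array([0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 1.0])

# Any v = (a, b, c) in R^3 equals the combination a*i + b*j + c*k.
v = np.array([4.0, -2.0, 7.0])  # an arbitrary sample vector
a, b, c = v
assert np.allclose(a * i + b * j + c * k, v)
```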
If the vectors v1, v2, ..., vk in the vector space V do not span V, we can ask about the subset W of V consisting of all those vectors that are linear combinations of v1, v2, ..., vk. The following theorem implies that this subset is always a subspace of V.
We say that the subspace W of Theorem 1 is the space spanned by the vectors v1, v2, ..., vk (or is the span of the set S = {v1, v2, ..., vk} of vectors). We sometimes write
W = span(S) = span{v1, v2, ..., vk}.
Thus Example 3 implies that R³ = span{i, j, k}. The question as to whether a given vector w in Rⁿ lies in the subspace span{v1, v2, ..., vk} reduces to solving a linear system, as illustrated by Examples 1 and 2.
It is easy to verify that the space W = span{v1, v2, ..., vk} of Theorem 1 is the smallest subspace of V that contains all the vectors v1, v2, ..., vk, meaning that every other subspace of V that contains these k vectors must also contain W (Problem 30).
Henceforth, when we solve a homogeneous system of linear equations, we generally will seek a set of solution vectors v1, v2, ..., vk that span the solution space W of the system. Perhaps the most concrete way to describe a subspace W of a vector space V is to give explicitly a set of vectors v1, v2, ..., vk that span W. And this type of representation is most useful and desirable (as well as most aesthetically pleasing) when each vector w in W is expressible in a unique way as a linear combination of v1, v2, ..., vk. (For instance, each vector in R³ is a unique linear combination of the vectors i, j, and k of Example 3.) But Example 2 demonstrates that a vector w may well be expressed in many different ways as a linear combination of given vectors v1, v2, ..., vk.
Thus not all spanning sets enjoy the uniqueness property that we desire. Two questions arise:
Given a subspace W of a vector space V, does there necessarily exist a spanning set with the uniqueness property that we desire?
If so, how do we find such a spanning set for W?
The following definition provides the key to both answers: the vectors v1, v2, ..., vk in a vector space V are said to be linearly independent provided that the equation
c1v1 + c2v2 + ... + ckvk = 0   (4)
has only the trivial solution c1 = c2 = ... = ck = 0.
We can immediately verify that any vector w in the subspace W spanned by the linearly independent vectors v1, v2, ..., vk is uniquely expressible as a linear combination of these vectors. For if
w = a1v1 + a2v2 + ... + akvk and w = b1v1 + b2v2 + ... + bkvk,
then subtraction gives
(a1 - b1)v1 + (a2 - b2)v2 + ... + (ak - bk)vk = 0.
Hence, with ci = ai - bi for each i, the linear independence of v1, v2, ..., vk implies that c1 = c2 = ... = ck = 0. Thus ai = bi for each i,
so we see that w can be expressed in only one way as a combination of the linearly independent vectors v1, v2, ..., vk.
The standard unit vectors
e1 = (1, 0, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, 0, ..., 1)
in Rⁿ are linearly independent. The reason is that the equation
c1e1 + c2e2 + ... + cnen = 0
evidently reduces to
(c1, c2, ..., cn) = (0, 0, ..., 0)
and thus has only the trivial solution c1 = c2 = ... = cn = 0.
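Since the standard unit vectors are the columns of the identity matrix, the homogeneous system above can be checked mechanically; the value n = 4 below is an arbitrary illustrative choice:

```python
import numpy as np

# For the standard unit vectors e1, ..., en (the columns of the
# identity matrix), c1 e1 + ... + cn en = 0 reads simply c = 0.
n = 4
E = np.eye(n)  # columns are e1, ..., en

# E has full rank, so E c = 0 has only the trivial solution.
assert np.linalg.matrix_rank(E) == n
c = np.linalg.solve(E, np.zeros(n))
assert np.allclose(c, np.zeros(n))
```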
To determine whether three given vectors v1, v2, and v3 are linearly independent, we write the equation c1v1 + c2v2 + c3v3 = 0 as a homogeneous linear system
and then solve for c1, c2, and c3. The augmented coefficient matrix of this system reduces to an echelon form from which
we see that the only solution is c1 = c2 = c3 = 0. Thus the vectors v1, v2, and v3 are linearly independent.
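The same test can be carried out by machine. The vectors below are hypothetical (Example 5's data are not reproduced here); the equation c1v1 + c2v2 + c3v3 = 0 is the homogeneous system Ac = 0, which has only the trivial solution exactly when A has full column rank:

```python
import numpy as np

# Hypothetical vectors standing in for Example 5's v1, v2, v3.
v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 0.0, 1.0])

# c1*v1 + c2*v2 + c3*v3 = 0 is the homogeneous system A c = 0;
# it has only the trivial solution when A has full column rank.
A = np.column_stack([v1, v2, v3])
independent = (np.linalg.matrix_rank(A) == 3)
print(independent)  # True for these sample vectors
```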
Observe that linear independence of the vectors actually is a property of the set whose elements are these vectors. Occasionally the phraseology “the set is linearly independent” is more convenient. For instance, any subset of a linearly independent set is a linearly independent set of vectors (Problem 29).
Now we show that the coefficients in a linear combination of the linearly independent vectors v1, v2, ..., vk are unique. If both
w = a1v1 + a2v2 + ... + akvk   (5)
and
w = b1v1 + b2v2 + ... + bkvk,   (6)
then
a1v1 + a2v2 + ... + akvk = b1v1 + b2v2 + ... + bkvk,
so it follows that
(a1 - b1)v1 + (a2 - b2)v2 + ... + (ak - bk)vk = 0.   (7)
Because the vectors v1, v2, ..., vk are linearly independent, each of the coefficients in (7) must vanish. Therefore, ai = bi for 1 ≤ i ≤ k, so we have shown that the linear combinations in (5) and (6) actually are identical. Hence, if a vector w is in the span of the set {v1, v2, ..., vk}, then it can be expressed in only one way as a linear combination of these linearly independent vectors.
A set of vectors is called linearly dependent provided it is not linearly independent. Hence the vectors v1, v2, ..., vk are linearly dependent if and only if there exist scalars c1, c2, ..., ck not all zero such that
c1v1 + c2v2 + ... + ckvk = 0.   (8)
In short, a (finite) set of vectors is linearly dependent provided that some nontrivial linear combination of them equals the zero vector.
Let v1, v2, v3, and v4 be the four given vectors in R³. Then the equation c1v1 + c2v2 + c3v3 + c4v4 = 0 is equivalent to a homogeneous linear system
of three equations in four unknowns. Because this homogeneous system has more unknowns than equations, Theorem 3 in Section 3.3 implies that it has a nontrivial solution. Therefore we may conclude, without even solving explicitly for c1, c2, c3, and c4, that the vectors v1, v2, v3, and v4 are linearly dependent. (It happens that
as you can verify easily.)
The argument in Example 6 may be generalized in an obvious way to prove that any set of more than n vectors in Rⁿ is linearly dependent. For if k > n, then Eq. (8) is equivalent to a homogeneous linear system with more unknowns (k) than equations (n), so Theorem 3 in Section 3.3 yields a nontrivial solution.
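A nontrivial dependence among more than n vectors in Rⁿ can also be computed explicitly. The four illustrative vectors in R³ below are not Example 6's data; one standard way to produce the dependence is to take a null-space vector of the matrix whose columns are the given vectors:

```python
import numpy as np

# Any 4 vectors in R^3 are linearly dependent; a nontrivial
# dependence is a nonzero vector c with A c = 0.
# These particular columns are illustrative, not Example 6's.
A = np.column_stack([
    [1.0, 0.0, 2.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0],
    [2.0, 1.0, 3.0],
])

# The last right-singular vector lies in the null space of A
# whenever A has more columns than its rank (here rank 3 < 4).
_, s, vt = np.linalg.svd(A)
c = vt[-1]                      # coefficients c1, c2, c3, c4
assert np.allclose(A @ c, 0.0)  # a nontrivial combination giving 0
assert np.linalg.norm(c) > 0.5  # and c is not the zero vector
```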
We now look at the way in which the elements of a linearly dependent set of vectors “depend” on one another. We know that there exist scalars c1, c2, ..., ck not all zero such that
c1v1 + c2v2 + ... + ckvk = 0.   (9)
Suppose that the pth coefficient is nonzero: cp ≠ 0. Then we can solve Eq. (9) for cpvp and next divide by cp to get
vp = a1v1 + ... + a(p-1)v(p-1) + a(p+1)v(p+1) + ... + akvk,   (10)
where ai = -ci/cp for i ≠ p. Thus at least one of the linearly dependent vectors is a linear combination of the other k - 1. Conversely, suppose we are given a set of vectors v1, v2, ..., vk with one of them dependent on the others as in Eq. (10). Then we can transpose all the terms to the left-hand side to get an equation of the form in (9) with cp = -1 ≠ 0. This shows that the vectors are linearly dependent. Therefore, we have proved that the vectors v1, v2, ..., vk are linearly dependent if and only if at least one of them is a linear combination of the others.
For instance (as we saw in Section 4.1), two vectors are linearly dependent if and only if one of them is a scalar multiple of the other, in which case the two vectors are collinear. Three vectors are linearly dependent if and only if one of them is a linear combination of the other two, in which case the three vectors are coplanar.
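The passage from a dependence relation (9) to the explicit expression (10) can be demonstrated on a small made-up example, with v3 deliberately constructed as a combination of v1 and v2:

```python
import numpy as np

# Illustrative dependence: v3 = 2*v1 + 1*v2, that is,
# 2*v1 + 1*v2 + (-1)*v3 = 0, a relation of the form in Eq. (9).
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = 2 * v1 + v2

c = np.array([2.0, 1.0, -1.0])  # the scalars in Eq. (9)
p = 2                           # index with c[p] != 0

# Solving Eq. (9) for v_p as in Eq. (10): a_i = -c_i / c_p.
a = -c / c[p]
vp = a[0] * v1 + a[1] * v2      # the i = p term is omitted
assert np.allclose(vp, v3)      # recovers v3 from v1 and v2
```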
In Theorem 4 of Section 4.1 we saw that the determinant provides a criterion for deciding whether three vectors in R³ are linearly independent: The vectors v1, v2, v3 in R³ are linearly independent if and only if the determinant of the matrix A = [v1 v2 v3] having them as its column vectors
is nonzero. The proof given there in the three-dimensional case generalizes readily to the n-dimensional case. Given n vectors v1, v2, ..., vn in Rⁿ, we consider the n × n matrix A = [v1 v2 ... vn]
having these vectors as its column vectors. Then, by Theorem 2 in Section 3.6, det A ≠ 0 if and only if A is invertible, in which case the system Ac = 0 has only the trivial solution c = 0, so the vectors v1, v2, ..., vn must be linearly independent.
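The determinant criterion is easy to apply by machine; the 3 × 3 matrix below is an arbitrary illustration of the n = 3 case:

```python
import numpy as np

# Illustrative n = 3 case: the columns of A are linearly
# independent exactly when det(A) != 0 (Theorem 2, Section 3.6).
A = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

det = np.linalg.det(A)
independent = not np.isclose(det, 0.0)
# Here det is (approximately) 3, so the columns are independent.
print(det, independent)
```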
We saw earlier that a set of more than n vectors in Rⁿ is always linearly dependent. The following theorem shows us how the determinant provides a criterion in the case of fewer than n vectors in Rⁿ.
Rather than including a complete proof, we will simply illustrate the “if” part of Theorem 3 in the case k = 3, n = 5. Let v1, v2, and v3 be three vectors in R⁵ such that the 5 × 3 matrix A = [v1 v2 v3]
has a 3 × 3 submatrix with nonzero determinant. Suppose, for instance, that the submatrix obtained by deleting the second and fourth rows of A has nonzero determinant.
Then Theorem 2 implies that the three vectors u1, u2, and u3 in R³ obtained by deleting the second and fourth components of v1, v2, and v3 are linearly independent. Now suppose that c1v1 + c2v2 + c3v3 = 0. Then by deleting the second and fourth components of each vector in this equation, we find that c1u1 + c2u2 + c3u3 = 0. But the fact that u1, u2, u3 are linearly independent implies that c1 = c2 = c3 = 0, and it now follows that v1, v2, v3 are linearly independent.
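The submatrix test of Theorem 3 can be sketched with made-up vectors in R⁵ (these are illustrative only): deleting the second and fourth rows leaves a 3 × 3 submatrix whose determinant is nonzero, so the three columns are independent.

```python
import numpy as np

# Illustrative case k = 3, n = 5: three hypothetical vectors in R^5.
A = np.column_stack([
    [1.0, 7.0, 0.0, 4.0, 0.0],
    [0.0, 2.0, 1.0, 9.0, 0.0],
    [0.0, 5.0, 0.0, 3.0, 1.0],
])

# Delete the second and fourth rows (components), as in the text:
# keep rows 1, 3, 5 (indices 0, 2, 4).
sub = A[[0, 2, 4], :]
assert not np.isclose(np.linalg.det(sub), 0.0)

# Hence the columns of A are linearly independent in R^5.
assert np.linalg.matrix_rank(A) == 3
```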
In Problems 1–8, determine whether the given vectors are linearly independent or linearly dependent. Do this essentially by inspection—that is, without solving a linear system of equations.
In Problems 9–16, express the indicated vector w as a linear combination of the given vectors if this is possible. If not, show that it is impossible.
In Problems 17–22, three vectors v1, v2, and v3 are given. If they are linearly independent, show this; otherwise find a nontrivial linear combination of them that is equal to the zero vector.
In Problems 23–26, the vectors are known to be linearly independent. Apply the definition of linear independence to show that the vectors are also linearly independent.
Prove: If the (finite) set S of vectors contains the zero vector, then S is linearly dependent.
Prove: If the set S of vectors is linearly dependent and the (finite) set T contains S, then T is also linearly dependent. You may assume that and that with .
Show that if the (finite) set S of vectors is linearly independent, then any subset T of S is also linearly independent.
Suppose that the subspace U of the vector space V contains the vectors . Show that U contains the subspace spanned by these vectors.
Let S and T be sets of vectors in a vector space such that S is a subset of span(T). Show that span(S) is also a subset of span(T).
Let v1, v2, ..., vk be linearly independent vectors in the set S of vectors. Prove: If no set of more than k vectors in S is linearly independent, then every vector in S is a linear combination of v1, v2, ..., vk.
In Problems 33–35, let v1, v2, ..., vk be vectors in Rⁿ and let
A = [v1 v2 ... vk]
be the n × k matrix with these vectors as its column vectors.
Prove: If some k × k submatrix of A is the k × k identity matrix, then v1, v2, ..., vk are linearly independent.
Suppose that k = n, that the vectors v1, v2, ..., vn are linearly independent, and that B is a nonsingular n × n matrix. Prove that the column vectors of the matrix AB are linearly independent.
Suppose that k < n, that the vectors v1, v2, ..., vk are linearly independent, and that B is a nonsingular k × k matrix. Use Theorem 3 to show that the column vectors of AB are linearly independent.