3.4 Systems of Linear Equations—Computational Aspects

In Section 3.3, we obtained a necessary and sufficient condition for a system of linear equations to have solutions (Theorem 3.11 p. 174) and learned how to express the solutions to a nonhomogeneous system in terms of solutions to the corresponding homogeneous system (Theorem 3.9 p. 172). The latter result enables us to determine all the solutions to a given system if we can find one solution to the given system and a basis for the solution set of the corresponding homogeneous system. In this section, we use elementary row operations to accomplish these two objectives simultaneously. The essence of this technique is to transform a given system of linear equations into a system having the same solutions, but which is easier to solve (as in Section 1.4).

Definition.

Two systems of linear equations are called equivalent if they have the same solution set.

The following theorem and corollary give a useful method for obtaining equivalent systems.

Theorem 3.13.

Let Ax=b be a system of m linear equations in n unknowns, and let C be an invertible m×m matrix. Then the system (CA)x=Cb is equivalent to Ax=b.

Proof.

Let K be the solution set for Ax=b and K' the solution set for (CA)x=Cb. If w ∈ K, then Aw=b. So (CA)w=Cb, and hence w ∈ K'. Thus K ⊆ K'.

Conversely, if w ∈ K', then (CA)w=Cb. Hence

Aw = C⁻¹(CAw) = C⁻¹(Cb) = b;

so w ∈ K. Thus K' ⊆ K, and therefore, K=K'.

Corollary.

Let Ax=b be a system of m linear equations in n unknowns. If (A'|b') is obtained from (A|b) by a finite number of elementary row operations, then the system A'x=b' is equivalent to the original system.

Proof.

Suppose that (A'|b') is obtained from (A|b) by elementary row operations. These may be executed by multiplying (A|b) by elementary m×m matrices E1, E2, ..., Ep. Let C = Ep⋯E2E1; then

(A'|b') = C(A|b) = (CA|Cb).

Since each Ei is invertible, so is C. Now A' = CA and b' = Cb. Thus by Theorem 3.13, the system A'x=b' is equivalent to the system Ax=b.
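
The corollary lends itself to a quick numerical check. The sketch below assumes the Python library sympy is available; the matrix A, the vector b, and the particular row operations chosen are illustrative only and are not taken from the text. It builds one elementary matrix of each type, forms their product C, and confirms that C is invertible and that the solution of Ax=b still solves (CA)x=Cb.

```python
# A small check of Theorem 3.13 and its corollary (assumes sympy is installed).
from sympy import Matrix, eye

A = Matrix([[1, 2], [3, 4]])       # illustrative system Ax = b
b = Matrix([3, 7])
x = A.inv() * b                    # its unique solution, (1, 1)

E1 = Matrix([[0, 1], [1, 0]])      # type 1: interchange the two rows
E2 = eye(2); E2[1, 0] = -3         # type 3: add -3 times row 1 to row 2
E3 = eye(2); E3[1, 1] = 5          # type 2: multiply row 2 by 5
C = E3 * E2 * E1                   # the product of the elementary matrices

assert C.det() != 0                # C is invertible
assert (C * A) * x == C * b        # x still solves the transformed system (CA)x = Cb
```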

We now describe a method for solving any system of linear equations. Consider, for example, the system of linear equations

3x1 + 2x2 + 3x3 - 2x4 = 1
 x1 +  x2 +  x3       = 3
 x1 + 2x2 +  x3 -  x4 = 2.

First, we form the augmented matrix

\[
\begin{pmatrix}
3 & 2 & 3 & -2 & 1 \\
1 & 1 & 1 &  0 & 3 \\
1 & 2 & 1 & -1 & 2
\end{pmatrix}.
\]

By using elementary row operations, we transform the augmented matrix into an upper triangular matrix in which the first nonzero entry of each row is 1, and it occurs in a column to the right of the first nonzero entry of each preceding row. (Recall that matrix A is upper triangular if Aij=0 whenever i>j.)

  1. In the leftmost nonzero column, create a 1 in the first row. In our example, we can accomplish this step by interchanging the first and third rows. The resulting matrix is

    \[
    \begin{pmatrix}
    1 & 2 & 1 & -1 & 2 \\
    1 & 1 & 1 &  0 & 3 \\
    3 & 2 & 3 & -2 & 1
    \end{pmatrix}.
    \]
  2. By means of type 3 row operations, use the first row to obtain zeros in the remaining positions of the leftmost nonzero column. In our example, we must add -1 times the first row to the second row and then add -3 times the first row to the third row to obtain

    \[
    \begin{pmatrix}
    1 &  2 & 1 & -1 &  2 \\
    0 & -1 & 0 &  1 &  1 \\
    0 & -4 & 0 &  1 & -5
    \end{pmatrix}.
    \]
  3. Create a 1 in the next row in the leftmost possible column, without using previous row(s). In our example, the second column is the leftmost possible column, and we can obtain a 1 in the second row, second column by multiplying the second row by -1. This operation produces

    \[
    \begin{pmatrix}
    1 &  2 & 1 & -1 &  2 \\
    0 &  1 & 0 & -1 & -1 \\
    0 & -4 & 0 &  1 & -5
    \end{pmatrix}.
    \]
  4. Now use type 3 elementary row operations to obtain zeros below the 1 created in the preceding step. In our example, we must add four times the second row to the third row. The resulting matrix is

    \[
    \begin{pmatrix}
    1 & 2 & 1 & -1 &  2 \\
    0 & 1 & 0 & -1 & -1 \\
    0 & 0 & 0 & -3 & -9
    \end{pmatrix}.
    \]
  5. Repeat steps 3 and 4 on each succeeding row until no nonzero rows remain. (This creates zeros below the first nonzero entry in each row.) In our example, this can be accomplished by multiplying the third row by -1/3. This operation produces

    \[
    \begin{pmatrix}
    1 & 2 & 1 & -1 &  2 \\
    0 & 1 & 0 & -1 & -1 \\
    0 & 0 & 0 &  1 &  3
    \end{pmatrix}.
    \]

    We have now obtained the desired matrix. To complete the simplification of the augmented matrix, we must make the first nonzero entry in each row the only nonzero entry in its column. (This corresponds to eliminating certain unknowns from all but one of the equations.)

  6. Work upward, beginning with the last nonzero row, and add multiples of each row to the rows above. (This creates zeros above the first nonzero entry in each row.) In our example, the third row is the last nonzero row, and the first nonzero entry of this row lies in column 4. Hence we add the third row to the first and second rows to obtain zeros in row 1, column 4 and row 2, column 4. The resulting matrix is

    \[
    \begin{pmatrix}
    1 & 2 & 1 & 0 & 5 \\
    0 & 1 & 0 & 0 & 2 \\
    0 & 0 & 0 & 1 & 3
    \end{pmatrix}.
    \]
  7. Repeat the process described in step 6 for each preceding row until it is performed with the second row, at which time the reduction process is complete. In our example, we must add -2 times the second row to the first row in order to make the first row, second column entry become zero. This operation produces

    \[
    \begin{pmatrix}
    1 & 0 & 1 & 0 & 1 \\
    0 & 1 & 0 & 0 & 2 \\
    0 & 0 & 0 & 1 & 3
    \end{pmatrix}.
    \]

    We have now obtained the desired reduction of the augmented matrix. This matrix corresponds to the system of linear equations

    x1 + x3 = 1
         x2 = 2
         x4 = 3.

Recall that, by the corollary to Theorem 3.13, this system is equivalent to the original system. But this system is easily solved. Obviously x2=2 and x4=3. Moreover, x1 and x3 can have any values provided their sum is 1. Letting x3=t, we then have x1 = 1 - t. Thus an arbitrary solution to the original system has the form

\[
\begin{pmatrix} 1 - t \\ 2 \\ t \\ 3 \end{pmatrix}
= \begin{pmatrix} 1 \\ 2 \\ 0 \\ 3 \end{pmatrix}
+ t \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}.
\]

Observe that

\[
\left\{ \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix} \right\}
\]

is a basis for the solution set of the homogeneous system of equations corresponding to the given system.
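
The reduction just carried out in steps 1 through 7 can also be done mechanically. A minimal sketch, assuming the Python library sympy is installed; its rref and linsolve routines are used here as black boxes and are not part of the text's own development.

```python
from sympy import Matrix, symbols, linsolve

aug = Matrix([
    [3, 2, 3, -2, 1],
    [1, 1, 1,  0, 3],
    [1, 2, 1, -1, 2],
])
R, pivot_columns = aug.rref()
print(R)              # Matrix([[1, 0, 1, 0, 1], [0, 1, 0, 0, 2], [0, 0, 0, 1, 3]])
print(pivot_columns)  # (0, 1, 3): the columns containing the leading 1s

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
print(linsolve((aug[:, :4], aug[:, 4]), x1, x2, x3, x4))
# {(1 - x3, 2, x3, 3)} -- here x3 plays the role of the parameter t above
```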

In the preceding example we performed elementary row operations on the augmented matrix of the system until we obtained the augmented matrix of a system having properties 1, 2, and 3 on page 28. Such a matrix has a special name.

Definition.

A matrix is said to be in reduced row echelon form if the following three conditions are satisfied.

  1. (a) Any row containing a nonzero entry precedes any row in which all the entries are zero (if any).

  2. (b) The first nonzero entry in each row is the only nonzero entry in its column.

  3. (c) The first nonzero entry in each row is 1 and it occurs in a column to the right of the first nonzero entry in the preceding row.

Example 1

  1. (a) The matrix on page 184 is in reduced row echelon form. Note that the first nonzero entry of each row is 1 and that the column containing each such entry has all zeros otherwise. Also note that each time we move downward to a new row, we must move to the right one or more columns to find the first nonzero entry of the new row.

  2. (b) The matrix

    \[
    \begin{pmatrix}
    1 & 1 & 0 \\
    0 & 1 & 0 \\
    1 & 0 & 1
    \end{pmatrix}
    \]

    is not in reduced row echelon form, because the first column, which contains the first nonzero entry in row 1, contains another nonzero entry. Similarly, the matrix

    \[
    \begin{pmatrix}
    0 & 1 & 0 & 2 \\
    1 & 0 & 0 & 1 \\
    0 & 0 & 1 & 1
    \end{pmatrix}
    \]

    is not in reduced row echelon form, because the first nonzero entry of the second row is not to the right of the first nonzero entry of the first row. Finally, the matrix

    \[
    \begin{pmatrix}
    2 & 0 & 0 \\
    0 & 1 & 0
    \end{pmatrix}
    \]

    is not in reduced row echelon form, because the first nonzero entry of the first row is not 1.
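
Conditions (a), (b), and (c) translate directly into a test. The sketch below is plain Python; the function name is_rref and the representation of a matrix as a list of rows are our own choices, not notation from the text.

```python
def is_rref(M):
    """Check conditions (a)-(c) for a matrix given as a list of rows."""
    def leading(row):                          # index of the first nonzero entry, or None
        return next((j for j, v in enumerate(row) if v != 0), None)

    seen_zero_row = False
    prev_lead = -1
    for i, row in enumerate(M):
        lead = leading(row)
        if lead is None:                       # a zero row
            seen_zero_row = True
            continue
        if seen_zero_row:                      # (a) nonzero rows must precede zero rows
            return False
        if row[lead] != 1:                     # (c) the leading entry must be 1 ...
            return False
        if lead <= prev_lead:                  # (c) ... and lie to the right of the previous one
            return False
        if any(M[k][lead] != 0 for k in range(len(M)) if k != i):
            return False                       # (b) only nonzero entry in its column
        prev_lead = lead
    return True

print(is_rref([[1, 1, 0], [0, 1, 0], [1, 0, 1]]))                    # False
print(is_rref([[2, 0, 0], [0, 1, 0]]))                               # False
print(is_rref([[1, 0, 1, 0, 1], [0, 1, 0, 0, 2], [0, 0, 0, 1, 3]]))  # True
```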

It can be shown (see the corollary to Theorem 3.16) that the reduced row echelon form of a matrix is unique; that is, if different sequences of elementary row operations are used to transform a matrix into matrices Q and Q' in reduced row echelon form, then Q = Q'. Thus, although there are many different sequences of elementary row operations that can be used to transform a given matrix into reduced row echelon form, they all produce the same result.

The procedure described on pages 182-184 for reducing an augmented matrix to reduced row echelon form is called Gaussian elimination. It consists of two separate parts.

  1. In the forward pass (steps 1-5), the augmented matrix is transformed into an upper triangular matrix in which the first nonzero entry of each row is 1, and it occurs in a column to the right of the first nonzero entry of each preceding row.

  2. In the backward pass or back-substitution (steps 6-7), the upper triangular matrix is transformed into reduced row echelon form by making the first nonzero entry of each row the only nonzero entry of its column.

Of all the methods for transforming a matrix into its reduced row echelon form, Gaussian elimination requires the fewest arithmetic operations. (For large matrices, it requires approximately 50% fewer operations than the Gauss-Jordan method, in which the matrix is transformed into reduced row echelon form by using the first nonzero entry in each row to make zero all other entries in its column.) Because of this efficiency, Gaussian elimination is the preferred method when solving systems of linear equations on a computer. In this context, the Gaussian elimination procedure is usually modified in order to minimize roundoff errors. Since discussion of these techniques is inappropriate here, readers who are interested in such matters are referred to books on numerical analysis.

When a matrix is in reduced row echelon form, the corresponding system of linear equations is easy to solve. We present below a procedure for solving any system of linear equations for which the augmented matrix is in reduced row echelon form. First, however, we note that every matrix can be transformed into reduced row echelon form by Gaussian elimination. In the forward pass, we satisfy conditions (a) and (c) in the definition of reduced row echelon form and thereby make zero all entries below the first nonzero entry in each row. Then in the backward pass, we make zero all entries above the first nonzero entry in each row, thereby satisfying condition (b) in the definition of reduced row echelon form.

Theorem 3.14.

Gaussian elimination transforms any matrix into its reduced row echelon form.
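
Theorem 3.14 can be made concrete with a short program. The sketch below is plain Python (the name gaussian_elimination is ours); it performs the forward pass and then the backward pass using exact fractions, which sidesteps the roundoff issues mentioned above. Its output agrees with the final matrix obtained in the example that follows.

```python
from fractions import Fraction

def gaussian_elimination(rows):
    """Forward pass (steps 1-5), then backward pass (steps 6-7)."""
    M = [[Fraction(v) for v in row] for row in rows]
    m, n = len(M), len(M[0])
    pivots = []                                   # (row, column) of each leading 1
    r = 0
    for c in range(n):                            # forward pass
        i = next((k for k in range(r, m) if M[k][c] != 0), None)
        if i is None:
            continue
        M[r], M[i] = M[i], M[r]                   # type 1: interchange rows
        M[r] = [v / M[r][c] for v in M[r]]        # type 2: make the leading entry 1
        for k in range(r + 1, m):                 # type 3: clear the entries below
            if M[k][c] != 0:
                f = M[k][c]
                M[k] = [a - f * b for a, b in zip(M[k], M[r])]
        pivots.append((r, c))
        r += 1
        if r == m:
            break
    for r, c in reversed(pivots):                 # backward pass: clear the entries above
        for k in range(r):
            if M[k][c] != 0:
                f = M[k][c]
                M[k] = [a - f * b for a, b in zip(M[k], M[r])]
    return M, [c for _, c in pivots]

M, pivot_columns = gaussian_elimination([
    [2, 3, 1, 4, -9, 17],
    [1, 1, 1, 1, -3,  6],
    [1, 1, 1, 2, -5,  8],
    [2, 2, 2, 3, -8, 14],
])
for row in M:
    print([int(v) for v in row])   # all entries in this example reduce to integers
print(pivot_columns)               # [0, 1, 3]
```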

We now describe a method for solving a system in which the augmented matrix is in reduced row echelon form. To illustrate this procedure, we consider the system

2x1 + 3x2 +  x3 + 4x4 - 9x5 = 17
 x1 +  x2 +  x3 +  x4 - 3x5 = 6
 x1 +  x2 +  x3 + 2x4 - 5x5 = 8
2x1 + 2x2 + 2x3 + 3x4 - 8x5 = 14,

for which the augmented matrix is

\[
\begin{pmatrix}
2 & 3 & 1 & 4 & -9 & 17 \\
1 & 1 & 1 & 1 & -3 & 6 \\
1 & 1 & 1 & 2 & -5 & 8 \\
2 & 2 & 2 & 3 & -8 & 14
\end{pmatrix}.
\]

Applying Gaussian elimination to the augmented matrix of the system produces the following sequence of matrices.

\[
\begin{pmatrix}
2 & 3 & 1 & 4 & -9 & 17 \\
1 & 1 & 1 & 1 & -3 & 6 \\
1 & 1 & 1 & 2 & -5 & 8 \\
2 & 2 & 2 & 3 & -8 & 14
\end{pmatrix}
\longrightarrow
\begin{pmatrix}
1 & 1 & 1 & 1 & -3 & 6 \\
2 & 3 & 1 & 4 & -9 & 17 \\
1 & 1 & 1 & 2 & -5 & 8 \\
2 & 2 & 2 & 3 & -8 & 14
\end{pmatrix}
\longrightarrow
\begin{pmatrix}
1 & 1 & 1 & 1 & -3 & 6 \\
0 & 1 & -1 & 2 & -3 & 5 \\
0 & 0 & 0 & 1 & -2 & 2 \\
0 & 0 & 0 & 1 & -2 & 2
\end{pmatrix}
\longrightarrow
\begin{pmatrix}
1 & 1 & 1 & 1 & -3 & 6 \\
0 & 1 & -1 & 2 & -3 & 5 \\
0 & 0 & 0 & 1 & -2 & 2 \\
0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}
\longrightarrow
\begin{pmatrix}
1 & 1 & 1 & 0 & -1 & 4 \\
0 & 1 & -1 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & -2 & 2 \\
0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}
\longrightarrow
\begin{pmatrix}
1 & 0 & 2 & 0 & -2 & 3 \\
0 & 1 & -1 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & -2 & 2 \\
0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.
\]

The system of linear equations corresponding to this last matrix is

x1 + 2x3 - 2x5 = 3
x2 -  x3 +  x5 = 1
x4 - 2x5 = 2

Notice that we have ignored the last row since it consists entirely of zeros.

To solve a system for which the augmented matrix is in reduced row echelon form, divide the variables into two sets. The first set consists of those variables that appear as leftmost variables in one of the equations of the system (in this case the set is {x1, x2, x4}). The second set consists of all the remaining variables (in this case, {x3, x5}). To each variable in the second set, assign a parametric value t1, t2, ... (x3 = t1, x5 = t2), and then solve for the variables of the first set in terms of those in the second set:

x1 = -2x3 + 2x5 + 3 = -2t1 + 2t2 + 3
x2 =   x3 -  x5 + 1 =   t1 -  t2 + 1
x4 =  2x5 + 2       =  2t2 + 2.

Thus an arbitrary solution is of the form

\[
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix}
= \begin{pmatrix} -2t_1 + 2t_2 + 3 \\ t_1 - t_2 + 1 \\ t_1 \\ 2t_2 + 2 \\ t_2 \end{pmatrix}
= \begin{pmatrix} 3 \\ 1 \\ 0 \\ 2 \\ 0 \end{pmatrix}
+ t_1 \begin{pmatrix} -2 \\ 1 \\ 1 \\ 0 \\ 0 \end{pmatrix}
+ t_2 \begin{pmatrix} 2 \\ -1 \\ 0 \\ 2 \\ 1 \end{pmatrix},
\]

where t1, t2 ∈ R. Notice that

\[
\left\{ \begin{pmatrix} -2 \\ 1 \\ 1 \\ 0 \\ 0 \end{pmatrix},\ \begin{pmatrix} 2 \\ -1 \\ 0 \\ 2 \\ 1 \end{pmatrix} \right\}
\]

is a basis for the solution set of the corresponding homogeneous system of equations and

\[
\begin{pmatrix} 3 \\ 1 \\ 0 \\ 2 \\ 0 \end{pmatrix}
\]

is a particular solution to the original system.
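
The bookkeeping in this procedure is easy to mechanize. The following sketch is plain Python applied to the nonzero rows of the reduced row echelon form computed above; the names lead, free, s0, and basis are our own.

```python
R = [
    [1, 0,  2, 0, -2, 3],
    [0, 1, -1, 0,  1, 1],
    [0, 0,  0, 1, -2, 2],
]
n = len(R[0]) - 1                                  # number of unknowns
lead = {next(j for j, v in enumerate(row) if v != 0): row for row in R}
free = [j for j in range(n) if j not in lead]      # here [2, 4], i.e. x3 and x5

# Particular solution: set every free variable equal to 0.
s0 = [lead[j][n] if j in lead else 0 for j in range(n)]

# One basis vector of the homogeneous solution set for each free variable.
basis = []
for f in free:
    u = [0] * n
    u[f] = 1
    for j, row in lead.items():
        u[j] = -row[f]                             # move the free-variable term across
    basis.append(u)

print(s0)      # [3, 1, 0, 2, 0]
print(basis)   # [[-2, 1, 1, 0, 0], [2, -1, 0, 2, 1]]
```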

Therefore, in simplifying the augmented matrix of the system to reduced row echelon form, we are in effect simultaneously finding a particular solution to the original system and a basis for the solution set of the associated homogeneous system. Moreover, this procedure detects when a system is inconsistent, for by Exercise 3, solutions exist if and only if, in the reduction of the augmented matrix to reduced row echelon form, we do not obtain a row in which the only nonzero entry lies in the last column.

Thus to use this procedure for solving a system Ax=b of m linear equations in n unknowns, we need only begin to transform the augmented matrix (A|b) into its reduced row echelon form (A'|b') by means of Gaussian elimination. If a row is obtained in which the only nonzero entry lies in the last column, then the original system is inconsistent. Otherwise, discard any zero rows from (A'|b'), and write the corresponding system of equations. Solve this system as described above to obtain an arbitrary solution of the form

s = s0 + t1u1 + t2u2 + ... + t_{n-r}u_{n-r},

where r is the number of nonzero rows in A' (r ≤ m). The preceding equation is called a general solution of the system Ax=b. It expresses an arbitrary solution s of Ax=b in terms of n - r parameters. The following theorem states that s cannot be expressed in fewer than n - r parameters.

Theorem 3.15.

Let Ax=b be a system of r nonzero equations in n unknowns. Suppose that rank(A)=rank(A|b) and that (A|b) is in reduced row echelon form. Then

  1. (a) rank(A)=r.

  2. (b) If the general solution obtained by the procedure above is of the form

    s = s0 + t1u1 + t2u2 + ... + t_{n-r}u_{n-r},

    then {u1, u2, ..., u_{n-r}} is a basis for the solution set of the corresponding homogeneous system, and s0 is a solution to the original system.

Proof.

Since (A|b) is in reduced row echelon form, (A|b) must have r nonzero rows. Clearly these rows are linearly independent by the definition of the reduced row echelon form, and so rank(A|b)=r. Thus rank(A)=r.

Let K be the solution set for Ax=b, and let KH be the solution set for Ax=0. Setting t1 = t2 = ... = t_{n-r} = 0, we see that s = s0 ∈ K. But by Theorem 3.9 (p. 172), K = {s0} + KH. Hence

KH = {-s0} + K = span({u1, u2, ..., u_{n-r}}).

Because rank(A)=r, we have dim(KH) = n - r. Thus since dim(KH) = n - r and KH is generated by a set {u1, u2, ..., u_{n-r}} containing at most n - r vectors, we conclude that this set is a basis for KH.

An Interpretation of the Reduced Row Echelon Form

Let A be an m×n matrix with columns a1, a2, ..., an, and let B be the reduced row echelon form of A. Denote the columns of B by b1, b2, ..., bn. If the rank of A is r, then the rank of B is also r by the corollary to Theorem 3.4 (p. 152). Because B is in reduced row echelon form, no nonzero row of B can be a linear combination of the other rows of B. Hence B must have exactly r nonzero rows, and if r ≥ 1, the vectors e1, e2, ..., er must occur among the columns of B. For i = 1, 2, ..., r, let ji denote a column number of B such that b_{j_i} = e_i. We claim that a_{j_1}, a_{j_2}, ..., a_{j_r}, the columns of A corresponding to these columns of B, are linearly independent. For suppose that there are scalars c1, c2, ..., cr such that

\[
c_1 a_{j_1} + c_2 a_{j_2} + \cdots + c_r a_{j_r} = 0.
\]

Because B can be obtained from A by a sequence of elementary row operations, there exists (as in the proof of the corollary to Theorem 3.13) an invertible m×m matrix M such that MA=B. Multiplying the preceding equation by M yields

\[
c_1 M a_{j_1} + c_2 M a_{j_2} + \cdots + c_r M a_{j_r} = 0.
\]

Since M a_{j_i} = b_{j_i} = e_i, it follows that

c1e1 + c2e2 + ... + crer = 0.

Hence c1 = c2 = ... = cr = 0, proving that the vectors a_{j_1}, a_{j_2}, ..., a_{j_r} are linearly independent.

Because B has only r nonzero rows, every column of B has the form

\[
\begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_r \\ 0 \\ \vdots \\ 0 \end{pmatrix}
\]

for scalars d1, d2, ..., dr. The corresponding column of A must be

\[
\begin{aligned}
M^{-1}(d_1 e_1 + d_2 e_2 + \cdots + d_r e_r) &= d_1 M^{-1} e_1 + d_2 M^{-1} e_2 + \cdots + d_r M^{-1} e_r \\
&= d_1 M^{-1} b_{j_1} + d_2 M^{-1} b_{j_2} + \cdots + d_r M^{-1} b_{j_r} \\
&= d_1 a_{j_1} + d_2 a_{j_2} + \cdots + d_r a_{j_r}.
\end{aligned}
\]

The next theorem summarizes these results.

Theorem 3.16.

Let A be an m×n matrix of rank r, where r>0, and let B be the reduced row echelon form of A. Then

  1. (a) The number of nonzero rows in B is r.

  2. (b) For each i = 1, 2, ..., r, there is a column b_{j_i} of B such that b_{j_i} = e_i.

  3. (c) The columns of A numbered j1, j2, ..., jr are linearly independent.

  4. (d) For each k = 1, 2, ..., n, if column k of B is d1e1 + d2e2 + ... + drer, then column k of A is d_1 a_{j_1} + d_2 a_{j_2} + ... + d_r a_{j_r}.

Corollary.

The reduced row echelon form of a matrix is unique.

Proof.

Exercise. (See Exercise 15.)

Example 2

Let

\[
A = \begin{pmatrix}
2 & 4 & 6 & 2 & 4 \\
1 & 2 & 3 & 1 & 1 \\
2 & 4 & 8 & 0 & 0 \\
3 & 6 & 7 & 5 & 9
\end{pmatrix}.
\]

The reduced row echelon form of A is

\[
B = \begin{pmatrix}
1 & 2 & 0 & 4 & 0 \\
0 & 0 & 1 & -1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}.
\]

Since B has three nonzero rows, the rank of A is 3. The first, third, and fifth columns of B are e1, e2, and e3; so Theorem 3.16(c) asserts that the first, third, and fifth columns of A are linearly independent.

Let the columns of A be denoted a1, a2, a3, a4, and a5. Because the second column of B is 2e1, it follows from Theorem 3.16(d) that a2 = 2a1, as is easily checked. Moreover, since the fourth column of B is 4e1 + (-1)e2, the same result shows that

a4 = 4a1 + (-1)a3.
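
These relations are easy to confirm numerically. A brief check, assuming sympy is available, with A entered exactly as in Example 2:

```python
from sympy import Matrix

A = Matrix([
    [2, 4, 6, 2, 4],
    [1, 2, 3, 1, 1],
    [2, 4, 8, 0, 0],
    [3, 6, 7, 5, 9],
])
B, pivot_columns = A.rref()
print(pivot_columns)                     # (0, 2, 4): the first, third, and fifth columns
assert A.rank() == len(pivot_columns)    # the rank is 3

# Theorem 3.16(d): the remaining columns of A are the combinations read off from B.
assert A[:, 1] == 2 * A[:, 0]                # a2 = 2*a1
assert A[:, 3] == 4 * A[:, 0] - A[:, 2]      # a4 = 4*a1 + (-1)*a3
```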

In Example 6 of Section 1.6, we extracted a basis for R3 from the generating set

S = {(2, -3, 5), (8, -12, 20), (1, 0, -2), (0, 2, -1), (7, 2, 0)}.

The procedure described there can be streamlined by using Theorem 3.16. We begin by noting that if S were linearly independent, then S would be a basis for R3. In this case, it is clear that S is linearly dependent because S contains more than dim(R3)=3 vectors. Nevertheless, it is instructive to consider the calculation that is needed to determine whether S is linearly dependent or linearly independent. Recall that S is linearly dependent if there are scalars c1, c2, c3, c4, and c5, not all zero, such that

c1(2, -3, 5) + c2(8, -12, 20) + c3(1, 0, -2) + c4(0, 2, -1) + c5(7, 2, 0) = (0, 0, 0).

Thus S is linearly dependent if and only if the system of linear equations

 2c1 +  8c2 +  c3       + 7c5 = 0
-3c1 - 12c2       + 2c4 + 2c5 = 0
 5c1 + 20c2 - 2c3 -  c4       = 0

has a nonzero solution. The augmented matrix of this system of equations is

\[
A = \begin{pmatrix}
2 & 8 & 1 & 0 & 7 & 0 \\
-3 & -12 & 0 & 2 & 2 & 0 \\
5 & 20 & -2 & -1 & 0 & 0
\end{pmatrix},
\]

and its reduced row echelon form is

\[
B = \begin{pmatrix}
1 & 4 & 0 & 0 & 2 & 0 \\
0 & 0 & 1 & 0 & 3 & 0 \\
0 & 0 & 0 & 1 & 4 & 0
\end{pmatrix}.
\]

Using the technique described earlier in this section, we can find nonzero solutions of the preceding system, confirming that S is linearly dependent. However, Theorem 3.16(c) gives us additional information. Since the first, third, and fourth columns of B are e1, e2, and e3, we conclude that the first, third, and fourth columns of A are linearly independent. But the columns of A other than the last column (which is the zero vector) are vectors in S. Hence

β = {(2, -3, 5), (1, 0, -2), (0, 2, -1)}

is a linearly independent subset of S. It follows from (b) of Corollary 2 to the replacement theorem (p. 48) that β is a basis for R3.
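
The streamlined procedure fits in a few lines, again assuming sympy. The zero column of the homogeneous system is omitted, since it can never contain a leading 1; the pivot columns reported by rref are exactly the columns singled out by Theorem 3.16(c).

```python
from sympy import Matrix

S = [(2, -3, 5), (8, -12, 20), (1, 0, -2), (0, 2, -1), (7, 2, 0)]
M = Matrix(S).T                    # the vectors of S placed as columns
_, pivot_columns = M.rref()
beta = [S[j] for j in pivot_columns]
print(beta)                        # [(2, -3, 5), (1, 0, -2), (0, 2, -1)]
```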

Because every finite-dimensional vector space over F is isomorphic to Fn for some n, a similar approach can be used to reduce any finite generating set to a basis. This technique is illustrated in the next example.

Example 3

The set

S = {2 + x + 2x^2 + 3x^3, 4 + 2x + 4x^2 + 6x^3, 6 + 3x + 8x^2 + 7x^3, 2 + x + 5x^3, 4 + x + 9x^3}

generates a subspace V of P3(R). To find a subset of S that is a basis for V, we consider the subset

S' = {(2, 1, 2, 3), (4, 2, 4, 6), (6, 3, 8, 7), (2, 1, 0, 5), (4, 1, 0, 9)}

consisting of the images of the polynomials in S under the standard representation of P3(R) with respect to the standard ordered basis. Note that the 4×5 matrix in which the columns are the vectors in S' is the matrix A in Example 2. From the reduced row echelon form of A, which is the matrix B in Example 2, we see that the first, third, and fifth columns of A are linearly independent and the second and fourth columns of A are linear combinations of the first, third, and fifth columns. Hence

{(2, 1, 2, 3), (6, 3, 8, 7), (4, 1, 0, 9)}

is a basis for the subspace of R4 that is generated by S'. It follows that

{2 + x + 2x^2 + 3x^3, 6 + 3x + 8x^2 + 7x^3, 4 + x + 9x^3}

is a basis for the subspace V of P3(R).
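
The passage from S to S' and back can be scripted in the same way. A sketch assuming sympy; the helper coords is our own and simply lists the coefficients of a polynomial with the constant term first.

```python
from sympy import Matrix, Poly, symbols

x = symbols('x')
S = [2 + x + 2*x**2 + 3*x**3,
     4 + 2*x + 4*x**2 + 6*x**3,
     6 + 3*x + 8*x**2 + 7*x**3,
     2 + x + 5*x**3,
     4 + x + 9*x**3]

def coords(p):
    c = Poly(p, x).all_coeffs()[::-1]     # coefficients, constant term first
    return c + [0] * (4 - len(c))         # pad up to degree 3

M = Matrix([coords(p) for p in S]).T      # this is the matrix A of Example 2
_, pivot_columns = M.rref()
print([S[j] for j in pivot_columns])      # the three polynomials forming the basis
```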

We conclude this section by describing a method for extending a linearly independent subset S of a finite-dimensional vector space V to a basis for V. Recall that this is always possible by (c) of Corollary 2 to the replacement theorem (p. 48). Our approach is based on the replacement theorem and assumes that we can find an explicit basis β for V. Let S' be the ordered set consisting of the vectors in S followed by those in β. Since β ⊆ S', the set S' generates V. We can then apply the technique described above to reduce this generating set to a basis for V containing S.

Example 4

Let

V = {(x1, x2, x3, x4, x5) ∈ R5 : x1 + 7x2 + 5x3 - 4x4 + 2x5 = 0}.

It is easily verified that V is a subspace of R5 and that

S = {(-2, 0, 0, -1, -1), (1, 1, -2, -1, -1), (-5, 1, 0, 1, 1)}

is a linearly independent subset of V.

To extend S to a basis for V, we first obtain a basis β for V. To do so, we solve the system of linear equations that defines V. Since in this case V is defined by a single equation, we need only write the equation as

x1 = -7x2 - 5x3 + 4x4 - 2x5

and assign parametric values to x2, x3, x4, and x5. If x2=t1, x3=t2, x4=t3, and x5=t4, then the vectors in V have the form

(x1, x2, x3, x4, x5) = (-7t1 - 5t2 + 4t3 - 2t4, t1, t2, t3, t4)
                     = t1(-7, 1, 0, 0, 0) + t2(-5, 0, 1, 0, 0) + t3(4, 0, 0, 1, 0) + t4(-2, 0, 0, 0, 1).

Hence

β = {(-7, 1, 0, 0, 0), (-5, 0, 1, 0, 0), (4, 0, 0, 1, 0), (-2, 0, 0, 0, 1)}

is a basis for V by Theorem 3.15.

The matrix whose columns consist of the vectors in S followed by those in β is

\[
\begin{pmatrix}
-2 & 1 & -5 & -7 & -5 & 4 & -2 \\
0 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & -2 & 0 & 0 & 1 & 0 & 0 \\
-1 & -1 & 1 & 0 & 0 & 1 & 0 \\
-1 & -1 & 1 & 0 & 0 & 0 & 1
\end{pmatrix},
\]

and its reduced row echelon form is

\[
\begin{pmatrix}
1 & 0 & 0 & 1 & 1 & 0 & -1 \\
0 & 1 & 0 & 0 & -0.5 & 0 & 0 \\
0 & 0 & 1 & 1 & 0.5 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & -1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.
\]

Thus

{(-2, 0, 0, -1, -1), (1, 1, -2, -1, -1), (-5, 1, 0, 1, 1), (4, 0, 0, 1, 0)}

is a basis for V containing S.
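
Example 4 can be replayed mechanically, again assuming sympy. Here nullspace() returns a basis for V (for this matrix it agrees with the β computed above), and the pivot columns of the combined matrix pick out the extension of S.

```python
from sympy import Matrix

S = [(-2, 0, 0, -1, -1), (1, 1, -2, -1, -1), (-5, 1, 0, 1, 1)]
beta = [tuple(v) for v in Matrix([[1, 7, 5, -4, 2]]).nullspace()]
# beta == [(-7, 1, 0, 0, 0), (-5, 0, 1, 0, 0), (4, 0, 0, 1, 0), (-2, 0, 0, 0, 1)]

vectors = S + beta                                   # S followed by beta
M = Matrix.hstack(*[Matrix(v) for v in vectors])     # placed as columns
_, pivot_columns = M.rref()
print([vectors[j] for j in pivot_columns])
# S together with (4, 0, 0, 1, 0), as in the example above
```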

Exercises

  1. Label the following statements as true or false.

    1. (a) If (A'|b') is obtained from (A|b) by a finite sequence of elementary column operations, then the systems Ax=b and A'x=b' are equivalent.

    2. (b) If (A'|b') is obtained from (A|b) by a finite sequence of elementary row operations, then the systems Ax=b and A'x=b' are equivalent.

    3. (c) If A is an n×n matrix with rank n, then the reduced row echelon form of A is In.

    4. (d) Any matrix can be put in reduced row echelon form by means of a finite sequence of elementary row operations.

    5. (e) If (A|b) is in reduced row echelon form, then the system Ax=b is consistent.

    6. (f) Let Ax=b be a system of m linear equations in n unknowns for which the augmented matrix is in reduced row echelon form. If this system is consistent, then the dimension of the solution set of Ax=0 is n - r, where r equals the number of nonzero rows in A.

    7. (g) If a matrix A is transformed by elementary row operations into a matrix A' in reduced row echelon form, then the number of nonzero rows in A' equals the rank of A.

  2. Use Gaussian elimination to solve the following systems of linear equations.

    1. (a) x1 + 2x2 - x3 = 1
           2x1 + 2x2 + x3 = 1
           3x1 + 5x2 - 2x3 = 1

    2. (b) x1 - 2x2 - x3 = 1
           2x1 - 3x2 + x3 = 6
           3x1 - 5x2 = 7
           x1 + 5x3 = 9

    3. (c) x1 + 2x2 + 2x4 = 6
           3x1 + 5x2 - x3 + 6x4 = 17
           2x1 + 4x2 + x3 + 2x4 = 12
           2x1 - 7x3 + 11x4 = 7

    4. (d) x1x22x3+3x4=632x1x2+6x3+6x4=22x1+x24x33x4=03x12x2+9x3+10x4=5

    5. (e) x1 - 4x2 - x3 + x4 = 3
           2x1 - 8x2 + x3 - 4x4 = 9
           -x1 + 4x2 - 2x3 + 5x4 = -6

    6. (f) x1 + 2x2 - x3 + 3x4 = 2
           2x1 + 4x2 - x3 + 6x4 = 5
           x2 + 2x4 = 3

    7. (g) 2x1 - 2x2 - x3 + 6x4 - 2x5 = 1
           x1 - x2 + x3 + 2x4 - x5 = 2
           4x1 - 4x2 + 5x3 + 7x4 - x5 = 6

    8. (h) 3x1 - x2 + x3 - x4 + 2x5 = 5
           x1 - x2 - x3 - 2x4 - x5 = 2
           5x1 - 2x2 + x3 - 3x4 + 3x5 = 10
           2x1 - x2 - 2x4 + x5 = 5

    9. (i) 3x1 - x2 + 2x3 + 4x4 + x5 = 2
           x1 - x2 + 2x3 + 3x4 + x5 = 1
           2x1 - 3x2 + 6x3 + 9x4 + 4x5 = 5
           7x1 - 2x2 + 4x3 + 8x4 + x5 = 6

    10. (j) 2x1 + 3x3 - 4x5 = 5
            3x1 - 4x2 + 8x3 + 3x4 = 8
            x1 - x2 + 2x3 + x4 - x5 = 2
            2x1 + 5x2 - 9x3 - 3x4 - 5x5 = 8

  3. Suppose that the augmented matrix of a system Ax=b is transformed into a matrix (A'|b') in reduced row echelon form by a finite sequence of elementary row operations.

    1. (a) Prove that rank(A') ≠ rank(A'|b') if and only if (A'|b') contains a row in which the only nonzero entry lies in the last column.

    2. (b) Deduce that Ax=b is consistent if and only if (A'|b') contains no row in which the only nonzero entry lies in the last column.

  4. For each of the systems that follow, apply Exercise 3 to determine whether the system is consistent. If the system is consistent, find all solutions. Finally, find a basis for the solution set of the corresponding homogeneous system.

    1. (a) x1 + 2x2 - x3 + x4 = 2
           2x1 + x2 + x3 - x4 = 3
           x1 + 2x2 - 3x3 + 2x4 = 2

    2. (b) x1 + x2 - 3x3 + x4 = -2
           x1 + x2 + x3 - x4 = 2
           x1 + x2 - x3 = 0

    3. (c) x1 + x2 - 3x3 + x4 = 1
           x1 + x2 + x3 - x4 = 2
           x1 + x2 - x3 = 0

  5. Let the reduced row echelon form of A be

    \[
    \begin{pmatrix}
    1 & 0 & 2 & 0 & 2 \\
    0 & 1 & 5 & 0 & 3 \\
    0 & 0 & 0 & 1 & 6
    \end{pmatrix}.
    \]

    Determine A if the first, second, and fourth columns of A are

    \[
    \begin{pmatrix} 1 \\ 1 \\ 3 \end{pmatrix}, \quad
    \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \quad \text{and} \quad
    \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix},
    \]

    respectively.

  6. Let the reduced row echelon form of A be

    \[
    \begin{pmatrix}
    1 & 3 & 0 & 4 & 0 & 5 \\
    0 & 0 & 1 & 3 & 0 & 2 \\
    0 & 0 & 0 & 0 & 1 & 1 \\
    0 & 0 & 0 & 0 & 0 & 0
    \end{pmatrix}.
    \]

    Determine A if the first, third, and sixth columns of A are

    \[
    \begin{pmatrix} 1 \\ 2 \\ 1 \\ 3 \end{pmatrix}, \quad
    \begin{pmatrix} 1 \\ 1 \\ 2 \\ 4 \end{pmatrix}, \quad \text{and} \quad
    \begin{pmatrix} 3 \\ 9 \\ 2 \\ 5 \end{pmatrix},
    \]

    respectively.

  7. It can be shown that the vectors u1 = (2, -3, 1), u2 = (1, 4, -2), u3 = (-8, 12, -4), u4 = (1, 37, -17), and u5 = (-3, -5, 8) generate R3. Find a subset of {u1, u2, u3, u4, u5} that is a basis for R3.

  8. Let W denote the subspace of R5 consisting of all vectors having coordinates that sum to zero. The vectors

    u1 = (2, -3, 4, -5, 2),    u2 = (-6, 9, -12, 15, -6),
    u3 = (3, -2, 7, -9, 1),    u4 = (2, -8, 2, -2, 6),
    u5 = (-1, 1, 2, 1, -3),    u6 = (0, -3, -18, 9, 12),
    u7 = (1, 0, -2, 3, -2),    and u8 = (2, -1, 1, -9, 7)

    generate W. Find a subset of {u1, u2, ..., u8} that is a basis for W.

  9. Let W be the subspace of M2×2(R) consisting of the symmetric 2×2 matrices. The set

    \[
    S = \left\{ \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix},\ \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix},\ \begin{pmatrix} 2 & 1 \\ 1 & 9 \end{pmatrix},\ \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix},\ \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix} \right\}
    \]

    generates W. Find a subset of S that is a basis for W.

  10. Let

    V = {(x1, x2, x3, x4, x5) ∈ R5 : x1 - 2x2 + 3x3 - x4 + 2x5 = 0}.
    1. (a) Show that S={(0, 1, 1, 1, 0)} is a linearly independent subset of V.

    2. (b) Extend S to a basis for V.

  11. Let V be as in Exercise 10.

    1. (a) Show that S={(1, 2, 1, 0, 0)} is a linearly independent subset of V.

    2. (b) Extend S to a basis for V.

  12. Let V denote the set of all solutions to the system of linear equations

    x1 - x2 + 2x4 - 3x5 + x6 = 0
    2x1 - x2 - x3 + 3x4 - 4x5 + 4x6 = 0.
    1. (a) Show that S = {(0, -1, 0, 1, 1, 0), (1, 0, 1, 1, 1, 0)} is a linearly independent subset of V.

    2. (b) Extend S to a basis for V.

  13. Let V be as in Exercise 12.

    1. (a) Show that S={(1, 0, 1, 1, 1, 0), (0, 2, 1, 1, 0, 0)} is a linearly independent subset of V.

    2. (b) Extend S to a basis for V.

  14. If (A|b) is in reduced row echelon form, prove that A is also in reduced row echelon form.

  15. Prove the corollary to Theorem 3.16: The reduced row echelon form of a matrix is unique. Visit goo.gl/cZVzxM for a solution.
