2.3 Composition of Linear Transformations and Matrix Multiplication

In Section 2.2, we learned how to associate a matrix with a linear transformation in such a way that both sums and scalar multiples of matrices are associated with the corresponding sums and scalar multiples of the transformations. The question now arises as to how the matrix representation of a composite of linear transformations is related to the matrix representation of each of the associated linear transformations. The attempt to answer this question leads to a definition of matrix multiplication. We use the more convenient notation UT rather than U∘T for the composite of linear transformations U and T. (See Appendix B.)

Our first result shows that the composite of linear transformations is linear.

Theorem 2.9.

Let V, W, and Z be vector spaces over the same field F, and let T: V → W and U: W → Z be linear. Then UT: V → Z is linear.

Proof.

Let x, y ∈ V and a ∈ F. Then

UT(ax + y) = U(T(ax + y)) = U(aT(x) + T(y)) = aU(T(x)) + U(T(y)) = a(UT)(x) + UT(y).

The following theorem lists some of the properties of the composition of linear transformations.

Theorem 2.10.

Let V be a vector space. Let T, U1, U2 ∈ L(V). Then

  1. (a) T(U1 + U2) = TU1 + TU2 and (U1 + U2)T = U1T + U2T

  2. (b) T(U1U2) = (TU1)U2

  3. (c) TI = IT = T

  4. (d) a(U1U2) = (aU1)U2 = U1(aU2) for all scalars a.

Proof.

Exercise.

A more general result holds for linear transformations that have domains unequal to their codomains. (See Exercise 8.)

If T ∈ L(V), there are circumstances where it is natural to compose T with itself one or more times. In Example 6 of Section 2.1, for instance, we considered the linear transformation T: V → V defined by T(f) = f′, where V denotes the set of all real-valued functions on the real line that have derivatives of every order. In this context, TT(f) = T(f′) = (f′)′ = f″ is the second derivative of f, and TTT(f) = f‴ is the third derivative of f. In this type of situation, the following notation is useful.

If T ∈ L(V), we define T¹ = T, T² = TT, T³ = T²T, and, in general, T^k = T^{k−1}T for k = 2, 3, …. For convenience, we also define T⁰ = I_V.

We now turn our attention to the multiplication of matrices. Let V, W, and Z be finite-dimensional vector spaces, and let T: V → W and U: W → Z be linear transformations. Suppose that A = [U]^γ_β and B = [T]^β_α, where α = {v1, v2, …, vn}, β = {w1, w2, …, wm}, and γ = {z1, z2, …, zp} are ordered bases for V, W, and Z, respectively. We would like to define the product AB of two matrices so that AB = [UT]^γ_α. Consider the matrix [UT]^γ_α. For j = 1, 2, …, n, we have

(UT)(v_j) = U(T(v_j)) = U\left(\sum_{k=1}^{m} B_{kj} w_k\right) = \sum_{k=1}^{m} B_{kj} U(w_k)
= \sum_{k=1}^{m} B_{kj} \left(\sum_{i=1}^{p} A_{ik} z_i\right) = \sum_{i=1}^{p} \left(\sum_{k=1}^{m} A_{ik} B_{kj}\right) z_i = \sum_{i=1}^{p} C_{ij} z_i,

where

C_{ij} = \sum_{k=1}^{m} A_{ik} B_{kj}.

This computation motivates the following definition of matrix multiplication.

Definition.

Let A be an m×n matrix and B be an n×p matrix. We define the product of A and B, denoted AB, to be the m×p matrix such that

(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj} \qquad \text{for } 1 \le i \le m,\ 1 \le j \le p.

Note that (AB)ij is the sum of products of corresponding entries from the ith row of A and the jth column of B. Some interesting applications of this definition are presented at the end of this section.

The reader should observe that in order for the product AB to be defined, there are restrictions regarding the relative sizes of A and B. The following mnemonic device is helpful: “(m × n)·(n × p) = (m × p)”; that is, in order for the product AB to be defined, the two “inner” dimensions must be equal, and the two “outer” dimensions yield the size of the product.

Example 1

We have

\begin{pmatrix} 1 & 2 & 1 \\ 0 & 4 & -1 \end{pmatrix} \begin{pmatrix} 4 \\ 2 \\ 5 \end{pmatrix} = \begin{pmatrix} 1\cdot 4 + 2\cdot 2 + 1\cdot 5 \\ 0\cdot 4 + 4\cdot 2 + (-1)\cdot 5 \end{pmatrix} = \begin{pmatrix} 13 \\ 3 \end{pmatrix}.

Notice again the symbolic relationship (2 × 3)·(3 × 1) = 2 × 1.
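
To make the definition concrete, here is a minimal sketch in Python (assuming NumPy is available) that computes AB entry by entry from the formula above, using the matrices of Example 1, and checks the result against NumPy's built-in product.

```python
import numpy as np

def matmul_from_definition(A, B):
    """Compute AB entry by entry from (AB)_ij = sum_k A_ik * B_kj."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must agree: (m x n)(n x p)"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1, 2, 1],
              [0, 4, -1]])            # the 2 x 3 matrix of Example 1
B = np.array([[4], [2], [5]])         # the 3 x 1 matrix of Example 1

print(matmul_from_definition(A, B))   # [[13.] [ 3.]]
print(A @ B)                          # NumPy's built-in product gives the same result
```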

As with composition of functions, matrix multiplication is not commutative. Consider the following two products:

\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}.

Hence we see that even if both of the matrix products AB and BA are defined, it need not be true that AB = BA.

Recalling the definition of the transpose of a matrix from Section 1.3, we show that if A is an m×n matrix and B is an n×p matrix, then (AB)^t = B^tA^t. Since

\left((AB)^t\right)_{ij} = (AB)_{ji} = \sum_{k=1}^{n} A_{jk} B_{ki}

and

(B^tA^t)_{ij} = \sum_{k=1}^{n} (B^t)_{ik}(A^t)_{kj} = \sum_{k=1}^{n} B_{ki} A_{jk},

we are finished. Therefore the transpose of a product is the product of the transposes in the opposite order.
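
A quick numerical spot check of this identity may be helpful (a sketch in Python with NumPy; the sizes and entries below are arbitrary choices, not taken from the text).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(2, 3))          # an arbitrary 2 x 3 matrix
B = rng.integers(-3, 4, size=(3, 4))          # an arbitrary 3 x 4 matrix

# (AB)^t and B^t A^t should be the same 4 x 2 matrix.
print(np.array_equal((A @ B).T, B.T @ A.T))   # True
```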

Our definition of matrix multiplication was chosen so that the next theorem is true.

Theorem 2.11.

Let V, W, and Z be finite-dimensional vector spaces with ordered bases α, β, and γ, respectively. Let T: V → W and U: W → Z be linear transformations. Then

[UT]^γ_α = [U]^γ_β [T]^β_α.

Corollary.

Let V be a finite-dimensional vector space with an ordered basis β. Let T, U ∈ L(V). Then [UT]_β = [U]_β [T]_β.

We illustrate Theorem 2.11 in the next example.

Example 2

Let U: P3(R) → P2(R) and T: P2(R) → P3(R) be the linear transformations respectively defined by

U(f(x)) = f'(x) \qquad \text{and} \qquad T(f(x)) = \int_0^x f(t)\, dt.

Let α and β be the standard ordered bases of P3(R) and P2(R), respectively. From calculus, it follows that UT = I, the identity transformation on P2(R). To illustrate Theorem 2.11, observe that

[UT]_β = [U]^β_α [T]^α_β = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & \frac{1}{3} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = [I]_β.
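
The same verification can be carried out numerically (a sketch in Python with NumPy; the two arrays are exactly the matrices displayed above).

```python
import numpy as np

U = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])       # [U] relative to the standard bases (differentiation)
T = np.array([[0, 0,   0],
              [1, 0,   0],
              [0, 1/2, 0],
              [0, 0, 1/3]])        # [T] relative to the standard bases (integration)

print(U @ T)                       # the 3 x 3 identity matrix, as Theorem 2.11 predicts
```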

The next theorem provides analogs of (a), (c), and (d) of Theorem 2.10. Theorem 2.10(b) has its analog in Theorem 2.16. Observe also that part (c) of the next theorem illustrates that the identity matrix acts as a multiplicative identity in Mn×n(F). When the context is clear, we sometimes omit the subscript n from I_n.

Theorem 2.12.

Let A be an m×n matrix, B and C be n×p matrices, and D and E be q×m matrices. Then

  1. (a) A(B + C) = AB + AC and (D + E)A = DA + EA.

  2. (b) a(AB) = (aA)B = A(aB) for any scalar a.

  3. (c) I_mA = A = AI_n.

Proof.

We prove the first half of (a) and (c) and leave the remaining proofs as an exercise. (See Exercise 5.)

(a) We have

[A(B+C)]_{ij} = \sum_{k=1}^{n} A_{ik}(B+C)_{kj} = \sum_{k=1}^{n} A_{ik}(B_{kj} + C_{kj}) = \sum_{k=1}^{n} (A_{ik}B_{kj} + A_{ik}C_{kj})
= \sum_{k=1}^{n} A_{ik}B_{kj} + \sum_{k=1}^{n} A_{ik}C_{kj} = (AB)_{ij} + (AC)_{ij} = [AB + AC]_{ij}.

So A(B + C) = AB + AC.

(c) We have

(I_mA)_{ij} = \sum_{k=1}^{m} (I_m)_{ik} A_{kj} = \sum_{k=1}^{m} \delta_{ik} A_{kj} = A_{ij}.

Corollary.

Let A be an m×n matrix, B1, B2, …, Bk be n×p matrices, C1, C2, …, Ck be q×m matrices, and a1, a2, …, ak be scalars. Then

A\left(\sum_{i=1}^{k} a_i B_i\right) = \sum_{i=1}^{k} a_i A B_i

and

\left(\sum_{i=1}^{k} a_i C_i\right) A = \sum_{i=1}^{k} a_i C_i A.

Proof.

Exercise.

For an n×n matrix A, we define A¹ = A, A² = AA, A³ = A²A, and, in general, A^k = A^{k−1}A for k = 2, 3, …. We define A⁰ = I_n.

With this notation, we see that if

A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},

then A² = O (the zero matrix) even though A ≠ O. Thus the cancellation property for multiplication in fields is not valid for matrices. To see why, assume that the cancellation law is valid. Then, from A·A = A² = O = A·O, we would conclude that A = O, which is false.
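
A brief sketch illustrating the failure of cancellation (Python with NumPy; A is the nilpotent matrix above).

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])

print(np.linalg.matrix_power(A, 2))   # the 2 x 2 zero matrix
print(np.any(A != 0))                 # True: A itself is not the zero matrix
```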

Theorem 2.13.

Let A be an m×n matrix and B be an n×p matrix. For each j (j = 1, 2, …, p), let u_j and v_j denote the jth columns of AB and B, respectively. Then

  1. (a) u_j = Av_j

  2. (b) v_j = Be_j, where e_j is the jth standard vector of F^p.

Proof.

(a) We have

u_j = \begin{pmatrix} (AB)_{1j} \\ (AB)_{2j} \\ \vdots \\ (AB)_{mj} \end{pmatrix} = \begin{pmatrix} \sum_{k=1}^{n} A_{1k}B_{kj} \\ \sum_{k=1}^{n} A_{2k}B_{kj} \\ \vdots \\ \sum_{k=1}^{n} A_{mk}B_{kj} \end{pmatrix} = A \begin{pmatrix} B_{1j} \\ B_{2j} \\ \vdots \\ B_{nj} \end{pmatrix} = Av_j.

Hence (a) is proved. The proof of (b) is left as an exercise. (See Exercise 6.)

It follows (see Exercise 14) from Theorem 2.13 that column j of AB is a linear combination of the columns of A with the coefficients in the linear combination being the entries of column j of B. An analogous result holds for rows; that is, row i of AB is a linear combination of the rows of B with the coefficients in the linear combination being the entries of row i of A.
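
The column interpretation is easy to check numerically (a sketch in Python with NumPy; the matrices and the chosen column index are arbitrary illustrations, not from the text).

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])              # arbitrary 3 x 2 matrix
B = np.array([[7, 8, 9],
              [0, 1, 2]])           # arbitrary 2 x 3 matrix

j = 1                               # look at the second column (zero-based index)
col_of_AB = (A @ B)[:, j]
combo = sum(B[k, j] * A[:, k] for k in range(A.shape[1]))
print(np.array_equal(col_of_AB, combo))   # True: column j of AB = sum_k B_kj * (column k of A)
```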

The next result justifies much of our past work. It utilizes both the matrix representation of a linear transformation and matrix multiplication in order to evaluate the transformation at any given vector.

Theorem 2.14.

Let V and W be finite-dimensional vector spaces having ordered bases β and γ, respectively, and let T: V → W be linear. Then, for each u ∈ V, we have

[T(u)]_γ = [T]^γ_β [u]_β.

Proof.

Fix u ∈ V, and define the linear transformations f: F → V by f(a) = au and g: F → W by g(a) = aT(u) for all a ∈ F. Let α = {1} be the standard ordered basis for F. Notice that g = Tf. Identifying column vectors as matrices and using Theorem 2.11, we obtain

[T(u)]_γ = [g(1)]_γ = [g]^γ_α = [Tf]^γ_α = [T]^γ_β [f]^β_α = [T]^γ_β [f(1)]_β = [T]^γ_β [u]_β.

Example 3

Let T: P3(R) → P2(R) be the linear transformation defined by T(f(x)) = f′(x), and let β and γ be the standard ordered bases for P3(R) and P2(R), respectively. If A = [T]^γ_β, then, from Example 4 of Section 2.2, we have

A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.

We illustrate Theorem 2.14 by verifying that [T(p(x))]_γ = [T]^γ_β [p(x)]_β, where p(x) ∈ P3(R) is the polynomial p(x) = 2 − 4x + x² + 3x³. Let q(x) = T(p(x)); then q(x) = p′(x) = −4 + 2x + 9x². Hence

[T(p(x))]_γ = [q(x)]_γ = \begin{pmatrix} -4 \\ 2 \\ 9 \end{pmatrix},

but also

[T]^γ_β [p(x)]_β = A[p(x)]_β = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} 2 \\ -4 \\ 1 \\ 3 \end{pmatrix} = \begin{pmatrix} -4 \\ 2 \\ 9 \end{pmatrix}.
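
Since the verification is just a matrix-vector product in coordinates, it can be reproduced directly (a sketch in Python with NumPy, using the matrix A and the coordinate vector of p(x) above).

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])        # matrix of differentiation, standard bases
p = np.array([2, -4, 1, 3])         # coordinates of p(x) = 2 - 4x + x^2 + 3x^3

print(A @ p)                        # [-4  2  9], the coordinates of -4 + 2x + 9x^2
```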

We complete this section with the introduction of the left-multiplication transformation L_A, where A is an m×n matrix. This transformation is probably the most important tool for transferring properties about transformations to analogous properties about matrices and vice versa. For example, we use it to prove that matrix multiplication is associative.

Definition.

Let A be an m×n matrix with entries from a field F. We denote by L_A the mapping L_A: F^n → F^m defined by L_A(x) = Ax (the matrix product of A and x) for each column vector x ∈ F^n. We call L_A a left-multiplication transformation.

Example 4

Let

A = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix}.

Then A ∈ M2×3(R) and L_A: R³ → R². If

x = \begin{pmatrix} 1 \\ 3 \\ -1 \end{pmatrix},

then

L_A(x) = Ax = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \\ -1 \end{pmatrix} = \begin{pmatrix} 6 \\ 1 \end{pmatrix}.

We see in the next theorem that not only is L_A linear, but, in fact, it has a great many other useful properties. These properties are all quite natural and so are easy to remember.

Theorem 2.15.

Let A be an m×n matrix with entries from F. Then the left-multiplication transformation L_A: F^n → F^m is linear. Furthermore, if B is any other m×n matrix (with entries from F) and β and γ are the standard ordered bases for F^n and F^m, respectively, then we have the following properties.

  1. (a) [L_A]^γ_β = A.

  2. (b) L_A = L_B if and only if A = B.

  3. (c) L_{A+B} = L_A + L_B and L_{aA} = aL_A for all a ∈ F.

  4. (d) If T: F^n → F^m is linear, then there exists a unique m×n matrix C such that T = L_C. In fact, C = [T]^γ_β.

  5. (e) If E is an n×p matrix, then L_{AE} = L_AL_E.

  6. (f) If m = n, then L_{I_n} = I_{F^n}.

Proof.

The fact that L_A is linear follows immediately from Theorem 2.12.

  1. (a) The jth column of [L_A]^γ_β is equal to L_A(e_j). However, L_A(e_j) = Ae_j, which is also the jth column of A by Theorem 2.13(b). So [L_A]^γ_β = A.

  2. (b) If L_A = L_B, then we may use (a) to write A = [L_A]^γ_β = [L_B]^γ_β = B. Hence A = B. The proof of the converse is trivial.

  3. (c) The proof is left as an exercise. (See Exercise 7.)

  4. (d) Let C = [T]^γ_β. By Theorem 2.14, we have [T(x)]_γ = [T]^γ_β [x]_β, or T(x) = Cx = L_C(x) for all x ∈ F^n. So T = L_C. The uniqueness of C follows from (b).

  5. (e) For any j (1 ≤ j ≤ p), we may apply Theorem 2.13 several times to note that (AE)e_j is the jth column of AE and that the jth column of AE is also equal to A(Ee_j). So (AE)e_j = A(Ee_j). Thus

    L_{AE}(e_j) = (AE)e_j = A(Ee_j) = L_A(Ee_j) = L_A(L_E(e_j)).

    Hence L_{AE} = L_AL_E by the corollary to Theorem 2.6 (p. 73).

  6. (f) The proof is left as an exercise. (See Exercise 7.)

We now use left-multiplication transformations to establish the associativity of matrix multiplication.

Theorem 2.16.

Let A, B, and C be matrices such that A(BC) is defined. Then (AB)C is also defined and A(BC) = (AB)C; that is, matrix multiplication is associative.

Proof.

It is left to the reader to show that (AB)C is defined. Using (e) of Theorem 2.15 and the associativity of functional composition (see Appendix B), we have

L_{A(BC)} = L_A L_{BC} = L_A(L_B L_C) = (L_A L_B)L_C = L_{AB} L_C = L_{(AB)C}.

So from (b) of Theorem 2.15, it follows that A(BC) = (AB)C.

Needless to say, this theorem could be proved directly from the definition of matrix multiplication (see Exercise 19). The proof above, however, provides a prototype of many of the arguments that utilize the relationships between linear transformations and matrices.
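
For a purely numerical illustration of the statement (a sketch in Python with NumPy; the shapes and random entries are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-5, 6, size=(2, 3))
B = rng.integers(-5, 6, size=(3, 4))
C = rng.integers(-5, 6, size=(4, 2))

print(np.array_equal(A @ (B @ C), (A @ B) @ C))   # True
```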

Applications*

For an application of matrix multiplication to the study of population growth, visit goo.gl/x5XDLw.

A large and varied collection of interesting applications arises in connection with special matrices called incidence matrices. An incidence matrix is a square matrix in which all the entries are either zero or one and, for convenience, all the diagonal entries are zero. If we have a relationship on a set of n objects that we denote by 1, 2, …, n, then we define the associated incidence matrix A by Aij = 1 if i is related to j, and Aij = 0 otherwise.

To make things concrete, suppose that we have four people, each of whom owns a communication device. If the relationship on this group is “can transmit to,” then Aij = 1 if i can send a message to j, and Aij = 0 otherwise. Suppose that

A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 \end{pmatrix}.

Then since A34 = 1 and A14 = 0, we see that person 3 can send to 4 but 1 cannot send to 4.

We obtain an interesting interpretation of the entries of A². Consider, for instance,

(A^2)_{31} = A_{31}A_{11} + A_{32}A_{21} + A_{33}A_{31} + A_{34}A_{41}.

Note that any term A_{3k}A_{k1} equals 1 if and only if both A_{3k} and A_{k1} equal 1, that is, if and only if 3 can send to k and k can send to 1. Thus (A²)31 gives the number of ways in which 3 can send to 1 in two stages (namely, 3 to 2 to 1 and 3 to 4 to 1). Since

A^2 = \begin{pmatrix} 1 & 0 & 0 & 1 \\ 1 & 2 & 0 & 0 \\ 2 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 \end{pmatrix},

we see that there are two ways 3 can send to 1 in two stages. In general, (A + A² + ⋯ + A^m)ij is the number of ways in which i can send to j in at most m stages.
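
The path-counting interpretation can be reproduced with a short sketch (Python with NumPy; A is the incidence matrix above).

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 1],
              [1, 1, 0, 0]])

A2 = A @ A
print(A2[2, 0])      # 2: person 3 can send to person 1 in two stages in exactly two ways

m = 2
reach = sum(np.linalg.matrix_power(A, k) for k in range(1, m + 1))
print(reach)         # entry (i, j) counts the ways i can send to j in at most m stages
```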

A maximal collection of three or more people with the property that any two can send to each other is called a clique. The problem of determining cliques is difficult, but there is a simple method for determining if someone belongs to a clique. If we define a new matrix B by Bij = 1 if i and j can send to each other, and Bij = 0 otherwise, then it can be shown (see Exercise 20) that person i belongs to a clique if and only if (B³)ii > 0. For example, suppose that the incidence matrix associated with some relationship is

A = \begin{pmatrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix}.

To determine which people belong to cliques, we form the matrix B, described earlier, and compute B³. In this case,

B = \begin{pmatrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{pmatrix} \qquad \text{and} \qquad B^3 = \begin{pmatrix} 0 & 4 & 0 & 4 \\ 4 & 0 & 4 & 0 \\ 0 & 4 & 0 & 4 \\ 4 & 0 & 4 & 0 \end{pmatrix}.

Since all the diagonal entries of B³ are zero, we conclude that there are no cliques in this relationship.
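
The clique test is easy to automate (a sketch in Python with NumPy; A is the matrix just displayed, and B is formed exactly as described above).

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])

B = A * A.T                          # B_ij = 1 exactly when A_ij = A_ji = 1
B3 = np.linalg.matrix_power(B, 3)
print(np.diag(B3))                   # [0 0 0 0]: no one belongs to a clique
print([i + 1 for i in range(4) if B3[i, i] > 0])   # []
```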

Our final example of the use of incidence matrices is concerned with the concept of dominance. A relation among a group of people is called a dominance relation if the associated incidence matrix A has the property that, for all distinct pairs i and j, Aij = 1 if and only if Aji = 0; that is, given any two people, exactly one of them dominates (or, using the terminology of our first example, can send a message to) the other. Since A is an incidence matrix, Aii = 0 for all i. For such a relation, it can be shown (see Exercise 22) that the matrix A + A² has a row [column] in which each entry is positive except for the diagonal entry. In other words, there is at least one person who dominates [is dominated by] all others in one or two stages. In fact, it can be shown that any person who dominates [is dominated by] the greatest number of people in the first stage has this property. Consider, for example, the matrix

A = \begin{pmatrix} 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 \end{pmatrix}.

The reader should verify that this matrix corresponds to a dominance relation. Now

A + A^2 = \begin{pmatrix} 0 & 2 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & 2 & 0 & 2 & 1 \\ 1 & 2 & 2 & 0 & 1 \\ 2 & 2 & 2 & 2 & 0 \end{pmatrix}.

Thus persons 1, 3, 4, and 5 dominate (can send messages to) all the others in at most two stages, while persons 1, 2, 3, and 4 are dominated by (can receive messages from) all the others in at most two stages.
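
A sketch of the same computation (Python with NumPy; A is the 5 × 5 dominance matrix above):

```python
import numpy as np

A = np.array([[0, 1, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1],
              [1, 1, 1, 0, 0]])

S = A + A @ A            # entry (i, j) counts ways i dominates j in one or two stages
n = A.shape[0]
off = ~np.eye(n, dtype=bool)

dominators = [i + 1 for i in range(n) if np.all(S[i][off[i]] > 0)]
dominated  = [j + 1 for j in range(n) if np.all(S[:, j][off[:, j]] > 0)]
print(dominators)        # [1, 3, 4, 5]
print(dominated)         # [1, 2, 3, 4]
```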

Exercises

  1. Label the following statements as true or false. In each part, V, W, and Z denote vector spaces with ordered (finite) bases α, β, and γ, respectively; T: V → W and U: W → Z denote linear transformations; and A and B denote matrices.

    1. (a) [UT]^γ_α = [T]^β_α [U]^γ_β.

    2. (b) [T(v)]_β = [T]^β_α [v]_α for all v ∈ V.

    3. (c) [U(w)]_β = [U]^β_α [w]_β for all w ∈ W.

    4. (d) [I_V]_α = I.

    5. (e) [T²]^β_α = ([T]^β_α)².

    6. (f) A² = I implies that A = I or A = −I.

    7. (g) T = L_A for some matrix A.

    8. (h) A² = O implies that A = O, where O denotes the zero matrix.

    9. (i) L_{A+B} = L_A + L_B.

    10. (j) If A is square and Aij = δij for all i and j, then A = I.

  2.

    1. (a) Let

      A = \begin{pmatrix} 1 & 3 \\ 2 & -1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 0 & -3 \\ 4 & 1 & 2 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 1 & 4 \\ -1 & -2 & 0 \end{pmatrix}, \quad\text{and}\quad D = \begin{pmatrix} 2 \\ -2 \\ 3 \end{pmatrix}.

      Compute A(2B + 3C), (AB)D, and A(BD).

    2. (b) Let

      A = \begin{pmatrix} 2 & 5 \\ -3 & 1 \\ 4 & 2 \end{pmatrix}, \quad B = \begin{pmatrix} 3 & -2 & 0 \\ 1 & -1 & 4 \\ 5 & 5 & 3 \end{pmatrix}, \quad\text{and}\quad C = \begin{pmatrix} 4 & 0 & 3 \end{pmatrix}.

      Compute A^t, A^tB, BC^t, CB, and CA.

  3. Let g(x) = 3 + x. Let T: P2(R) → P2(R) and U: P2(R) → R³ be the linear transformations respectively defined by

    T(f(x)) = f′(x)g(x) + 2f(x)   and   U(a + bx + cx²) = (a + b, c, a − b).

    Let β and γ be the standard ordered bases of P2(R) and R³, respectively.

    1. (a) Compute [U]^γ_β, [T]_β, and [UT]^γ_β directly. Then use Theorem 2.11 to verify your result.

    2. (b) Let h(x) = 3 − 2x + x². Compute [h(x)]_β and [U(h(x))]_γ. Then use [U]^γ_β from (a) and Theorem 2.14 to verify your result.

  4. For each of the following parts, let T be the linear transformation defined in the corresponding part of Exercise 5 of Section 2.2. Use Theorem 2.14 to compute the following vectors:

    1. (a) [T(A)]_α, where A = \begin{pmatrix} 1 & 4 \\ -1 & 6 \end{pmatrix}.

    2. (b) [T(f(x))]_α, where f(x) = 4 − 6x + 3x².

    3. (c) [T(A)]_γ, where A = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}.

    4. (d) [T(f(x))]_γ, where f(x) = 6 − x + 2x².

  5. Complete the proof of Theorem 2.12 and its corollary.

  6. Prove (b) of Theorem 2.13.

  7. Prove (c) and (f) of Theorem 2.15.

  8. Prove Theorem 2.10. Now state and prove a more general result involving linear transformations with domains unequal to their codomains.

  9. Find linear transformations U, T: F² → F² such that UT = T₀ (the zero transformation) but TU ≠ T₀. Use your answer to find matrices A and B such that AB = O but BA ≠ O.

  10. Let A be an n×n matrix. Prove that A is a diagonal matrix if and only if Aij = δijAij for all i and j.

  11. Let V be a vector space, and let T: V → V be linear. Prove that T² = T₀ if and only if R(T) ⊆ N(T).

  12. Let V, W, and Z be vector spaces, and let T: V → W and U: W → Z be linear.

    1. (a) Prove that if UT is one-to-one, then T is one-to-one. Must U also be one-to-one?

    2. (b) Prove that if UT is onto, then U is onto. Must T also be onto?

    3. (c) Prove that if U and T are one-to-one and onto, then UT is also.

  13. Let A and B be n×n matrices. Recall that the trace of A is defined by

    tr(A) = \sum_{i=1}^{n} A_{ii}.

    Prove that tr(AB) = tr(BA) and tr(A) = tr(A^t).

  14. Assume the notation in Theorem 2.13.

    1. (a) Suppose that z is a (column) vector in F^p. Use Theorem 2.13(b) to prove that Bz is a linear combination of the columns of B. In particular, if z = (a1, a2, …, ap)^t, then show that

      Bz = \sum_{j=1}^{p} a_j v_j.

    2. (b) Extend (a) to prove that column j of AB is a linear combination of the columns of A with the coefficients in the linear combination being the entries of column j of B.

    3. (c) For any row vector w ∈ F^m, prove that wA is a linear combination of the rows of A with the coefficients in the linear combination being the coordinates of w. Hint: Use properties of the transpose operation applied to (a).

    4. (d) Prove the analogous result to (b) about rows: Row i of AB is a linear combination of the rows of B with the coefficients in the linear combination being the entries of row i of A.

  15. Let A and B be matrices for which the product matrix AB is defined, and let uj and vj denote the jth columns of AB and B, respectively. If vp = c1vj1 + c2vj2 + ⋯ + ckvjk for some scalars c1, c2, …, ck, prove that up = c1uj1 + c2uj2 + ⋯ + ckujk. Visit goo.gl/sRpves for a solution.

  16. Let V be a finite-dimensional vector space, and let T: V → V be linear.

    1. (a) If rank(T) = rank(T²), prove that R(T) ∩ N(T) = {0}. Deduce that V = R(T) ⊕ N(T) (see the exercises of Section 1.3).

    2. (b) Prove that V = R(T^k) ⊕ N(T^k) for some positive integer k.

  17. For the definition of projection and related facts, see pages 76-77. Let V be a vector space and T: V → V be a linear transformation. Prove that T = T² if and only if T is a projection on W₁ = {y : T(y) = y} along N(T).

  18. Let β be an ordered basis for a finite-dimensional vector space V, and let T: V → V be linear. Prove that, for any nonnegative integer k, [T^k]_β = ([T]_β)^k.

  19. Using only the definition of matrix multiplication, prove that multiplication of matrices is associative.

  20. For an incidence matrix A with related matrix B defined by Bij = 1 if i is related to j and j is related to i, and Bij = 0 otherwise, prove that i belongs to a clique if and only if (B³)ii > 0.

  21. Use Exercise 20 to determine the cliques in the relations corresponding to the following incidence matrices.

    1. (a) \begin{pmatrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{pmatrix}

    2. (b) \begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{pmatrix}

  22. Let A be an incidence matrix that is associated with a dominance relation. Prove that the matrix A + A² has a row [column] in which each entry is positive except for the diagonal entry.

  23. Prove that the matrix

    A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}

    corresponds to a dominance relation. Use Exercise 22 to determine which persons dominate [are dominated by] each of the others within two stages.

  24. Let A be an n×n incidence matrix that corresponds to a dominance relation. Determine the number of nonzero entries of A.
