2.1 Linear Transformations, Null Spaces, and Ranges

In this section, we consider a number of examples of linear transformations. Many of these transformations are studied in more detail in later sections. Recall that a function T with domain V and codomain W is denoted by T: V → W. (See Appendix B.)

Definitions.

Let V and W be vector spaces over the same field F. We call a function T: V → W a linear transformation from V to W if, for all x, y ∈ V and c ∈ F, we have

  (a) T(x + y) = T(x) + T(y) and

  (b) T(cx) = cT(x).

If the underlying field F is the field of rational numbers, then (a) implies (b) (see Exercise 38), but, in general, (a) and (b) are logically independent. See Exercises 39 and 40.

We often simply call T linear. The reader should verify the following properties of a function T: V → W. (See Exercise 7.)

  1. If T is linear, then T(0)=0.

  2. T is linear if and only if T(cx + y) = cT(x) + T(y) for all x, y ∈ V and c ∈ F.

  3. If T is linear, then T(x − y) = T(x) − T(y) for all x, y ∈ V.

  4. T is linear if and only if, for x₁, x₂, …, xₙ ∈ V and a₁, a₂, …, aₙ ∈ F, we have

    T(∑ᵢ₌₁ⁿ aᵢxᵢ) = ∑ᵢ₌₁ⁿ aᵢT(xᵢ).

    We generally use property 2 to prove that a given transformation is linear.

Example 1

Define

T: R² → R²   by   T(a₁, a₂) = (2a₁ + a₂, a₁).

To show that T is linear, let c ∈ R and x, y ∈ R², where x = (b₁, b₂) and y = (d₁, d₂). Since

cx + y = (cb₁ + d₁, cb₂ + d₂),

we have

T(cx + y) = (2(cb₁ + d₁) + cb₂ + d₂, cb₁ + d₁).

Also

cT(x) + T(y) = c(2b₁ + b₂, b₁) + (2d₁ + d₂, d₁)
             = (2cb₁ + cb₂ + 2d₁ + d₂, cb₁ + d₁)
             = (2(cb₁ + d₁) + cb₂ + d₂, cb₁ + d₁).

So T is linear.
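As a quick numerical sanity check (separate from the proof above), the defining property can be tested on sample vectors; the following is a minimal sketch assuming NumPy is available, with T written as a small helper function.

```python
import numpy as np

def T(v):
    """The map of Example 1: T(a1, a2) = (2*a1 + a2, a1)."""
    a1, a2 = v
    return np.array([2 * a1 + a2, a1])

rng = np.random.default_rng(0)
for _ in range(5):
    x, y = rng.normal(size=2), rng.normal(size=2)
    c = rng.normal()
    # Property 2: T(cx + y) = cT(x) + T(y).
    assert np.allclose(T(c * x + y), c * T(x) + T(y))
```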

As we will see in Chapter 6, the applications of linear algebra to geometry are wide and varied. The main reason for this is that most of the important geometrical transformations are linear. Three particular transformations that we now consider are rotation, reflection, and projection. We leave the proofs of linearity to the reader.

Example 2

For any angle θ, define T_θ: R² → R² by the rule: T_θ(a₁, a₂) is the vector obtained by rotating (a₁, a₂) counterclockwise by θ if (a₁, a₂) ≠ (0, 0), and T_θ(0, 0) = (0, 0). Then T_θ: R² → R² is a linear transformation that is called the rotation by θ.

We determine an explicit formula for T_θ. Fix a nonzero vector (a₁, a₂) ∈ R². Let α be the angle that (a₁, a₂) makes with the positive x-axis (see Figure 2.1(a)), and let r = √(a₁² + a₂²). Then a₁ = r cos α and a₂ = r sin α. Also, T_θ(a₁, a₂) has length r and makes an angle α + θ with the positive x-axis. It follows that

T_θ(a₁, a₂) = (r cos(α + θ), r sin(α + θ))
            = (r cos α cos θ − r sin α sin θ, r cos α sin θ + r sin α cos θ)
            = (a₁ cos θ − a₂ sin θ, a₁ sin θ + a₂ cos θ).

Finally, observe that this same formula is valid for (a₁, a₂) = (0, 0). It is now easy to show, as in Example 1, that T_θ is linear.
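The formula just derived says that T_θ acts as multiplication by the 2 × 2 matrix with columns (cos θ, sin θ) and (−sin θ, cos θ). A short numerical sketch, assuming NumPy, comparing the formula with that matrix product and checking that rotation preserves length:

```python
import numpy as np

def T_theta(v, theta):
    """Rotation by theta: (a1 cos θ − a2 sin θ, a1 sin θ + a2 cos θ)."""
    a1, a2 = v
    return np.array([a1 * np.cos(theta) - a2 * np.sin(theta),
                     a1 * np.sin(theta) + a2 * np.cos(theta)])

theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v = np.array([3.0, -1.0])

assert np.allclose(T_theta(v, theta), R @ v)                             # same map
assert np.isclose(np.linalg.norm(T_theta(v, theta)), np.linalg.norm(v))  # length preserved
```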

Figure 2.1: (a) rotation by θ, (b) reflection about the x-axis, (c) projection on the x-axis.

Example 3

Define T: R² → R² by T(a₁, a₂) = (a₁, −a₂). T is called the reflection about the x-axis. (See Figure 2.1(b).)

Example 4

Define T: R² → R² by T(a₁, a₂) = (a₁, 0). T is called the projection on the x-axis. (See Figure 2.1(c).)

We now look at some additional examples of linear transformations.

Example 5

Define T: M_{m×n}(F) → M_{n×m}(F) by T(A) = Aᵗ, where Aᵗ is the transpose of A, defined in Section 1.3. Then T is a linear transformation by Exercise 3 of Section 1.3.

Example 6

Let V denote the set of all real-valued functions defined on the real line that have derivatives of every order. It is easily shown that V is a vector space over R. (See Exercise 16 of Section 1.3.)

Define T: V → V by T(f) = f′, the derivative of f. To show that T is linear, let g, h ∈ V and a ∈ R. Now

T(ag + h) = (ag + h)′ = ag′ + h′ = aT(g) + T(h).

So by property 2 above, T is linear.
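The same computation can be repeated symbolically for particular functions in V; a minimal sketch assuming SymPy, with g(x) = sin x and h(x) = x³ chosen only for illustration:

```python
import sympy as sp

x, a = sp.symbols('x a', real=True)
g, h = sp.sin(x), x**3

# T(f) = f': check T(a*g + h) = a*T(g) + T(h) for these particular g and h.
lhs = sp.diff(a * g + h, x)
rhs = a * sp.diff(g, x) + sp.diff(h, x)
assert sp.simplify(lhs - rhs) == 0
```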

Example 7

Let V = C(R), the vector space of continuous real-valued functions on R. Let a, b ∈ R, a < b. Define T: V → R by

T(f) = ∫ₐᵇ f(t) dt

for all f ∈ V. Then T is a linear transformation because the definite integral of a linear combination of functions is the same as the linear combination of the definite integrals of the functions.
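For a concrete instance of this linearity, take f(t) = t² and g(t) = eᵗ on [0, 1]; a minimal symbolic check assuming SymPy:

```python
import sympy as sp

t, c = sp.symbols('t c', real=True)
f, g = t**2, sp.exp(t)
a, b = 0, 1

# The definite integral of c*f + g equals c times the integral of f plus the integral of g.
lhs = sp.integrate(c * f + g, (t, a, b))
rhs = c * sp.integrate(f, (t, a, b)) + sp.integrate(g, (t, a, b))
assert sp.simplify(lhs - rhs) == 0
```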

Two very important examples of linear transformations that appear frequently in the remainder of the book, and therefore deserve their own notation, are the identity and zero transformations.

For vector spaces V and W (over F), we define the identity transformation I_V: V → V by I_V(x) = x for all x ∈ V and the zero transformation T₀: V → W by T₀(x) = 0 for all x ∈ V. It is clear that both of these transformations are linear. We often write I instead of I_V.

We now turn our attention to two very important sets associated with linear transformations: the range and null space. The determination of these sets allows us to examine more closely the intrinsic properties of a linear transformation.

Definitions.

Let V and W be vector spaces, and let T: V → W be linear. We define the null space (or kernel) N(T) of T to be the set of all vectors x in V such that T(x) = 0; that is, N(T) = {x ∈ V : T(x) = 0}.

We define the range (or image) R(T) of T to be the subset of W consisting of all images (under T) of vectors in V; that is, R(T) = {T(x) : x ∈ V}.

Example 8

Let V and W be vector spaces, and let I: V → V and T₀: V → W be the identity and zero transformations, respectively. Then N(I) = {0}, R(I) = V, N(T₀) = V, and R(T₀) = {0}.

Example 9

Let T: R³ → R² be the linear transformation defined by

T(a₁, a₂, a₃) = (a₁ − a₂, 2a₃).

It is left as an exercise to verify that

N(T) = {(a, a, 0) : a ∈ R}       and      R(T) = R².
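In coordinates, this T is multiplication by the 2 × 3 matrix with rows (1, −1, 0) and (0, 0, 2), so the claims about N(T) and R(T) can be checked numerically; a sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import null_space

# Matrix of T(a1, a2, a3) = (a1 - a2, 2*a3) with respect to the standard bases.
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 2.0]])

print(np.linalg.matrix_rank(A))   # 2, so R(T) = R^2
print(null_space(A))              # one column, proportional to (1, 1, 0)
```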

In Examples 8 and 9, we see that the range and null space of each of the linear transformations is a subspace. The next result shows that this is true in general.

Theorem 2.1.

Let V and W be vector spaces and T: V → W be linear. Then N(T) and R(T) are subspaces of V and W, respectively.

Proof.

To clarify the notation, we use the symbols 0_V and 0_W to denote the zero vectors of V and W, respectively.

Since T(0_V) = 0_W, we have that 0_V ∈ N(T). Let x, y ∈ N(T) and c ∈ F. Then T(x + y) = T(x) + T(y) = 0_W + 0_W = 0_W, and T(cx) = cT(x) = c0_W = 0_W. Hence x + y ∈ N(T) and cx ∈ N(T), so that N(T) is a subspace of V.

Because T(0_V) = 0_W, we have that 0_W ∈ R(T). Now let x, y ∈ R(T) and c ∈ F. Then there exist v and w in V such that T(v) = x and T(w) = y. So T(v + w) = T(v) + T(w) = x + y, and T(cv) = cT(v) = cx. Thus x + y ∈ R(T) and cx ∈ R(T), so R(T) is a subspace of W.

The next theorem provides a method for finding a spanning set for the range of a linear transformation. With this accomplished, a basis for the range is easy to discover using the technique of Example 6 of Section 1.6.

Theorem 2.2.

Let V and W be vector spaces, and let T: V → W be linear. If β = {v₁, v₂, …, vₙ} is a basis for V, then

R(T) = span(T(β)) = span({T(v₁), T(v₂), …, T(vₙ)}).

Proof.

Clearly T(vᵢ) ∈ R(T) for each i. Because R(T) is a subspace, R(T) contains span({T(v₁), T(v₂), …, T(vₙ)}) = span(T(β)) by Theorem 1.1 (p. 31).

Now suppose that w ∈ R(T). Then w = T(v) for some v ∈ V. Because β is a basis for V, we have

v = ∑ᵢ₌₁ⁿ aᵢvᵢ      for some a₁, a₂, …, aₙ ∈ F.

Since T is linear, it follows that

w = T(v) = ∑ᵢ₌₁ⁿ aᵢT(vᵢ) ∈ span(T(β)).

So R(T) is contained in span(T(β)).

It should be noted that Theorem 2.2 is true if β is infinite, that is, R(T) = span({T(v) : v ∈ β}). (See Exercise 34.)

The next example illustrates the usefulness of Theorem 2.2.

Example 10

Define the linear transformation T: P₂(R) → M_{2×2}(R) by

T(f(x)) = [ f(1) − f(2)    0   ]
          [      0       f(0)  ].

Since β = {1, x, x²} is a basis for P₂(R), we have (writing [a  b; c  d] for the 2 × 2 matrix with rows (a, b) and (c, d))

R(T) = span(T(β)) = span({T(1), T(x), T(x²)})
     = span({ [0  0; 0  1],  [−1  0; 0  0],  [−3  0; 0  0] })
     = span({ [0  0; 0  1],  [−1  0; 0  0] }).

Thus we have found a basis for R(T), and so dim(R(T))=2.

Now suppose that we want to find a basis for N(T). Note that f(x) ∈ N(T) if and only if T(f(x)) = O, the 2 × 2 zero matrix. That is, f(x) ∈ N(T) if and only if

[ f(1) − f(2)   0;   0   f(0) ]  =  [ 0   0;   0   0 ],

that is, if and only if f(1) − f(2) = 0 and f(0) = 0. Let f(x) = a + bx + cx². Then

0 = f(1) − f(2) = (a + b + c) − (a + 2b + 4c) = −b − 3c

and 0 = f(0) = a. Hence b = −3c, and

f(x) = a + bx + cx² = −3cx + cx² = c(−3x + x²).

Therefore a basis for N(T) is {−3x + x²}.

Note that in this example

dim(N(T)) + dim(R(T)) = 1 + 2 = 3 = dim(P₂(R)).
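Relative to the basis {1, x, x²} of P₂(R) and the four entries of a 2 × 2 matrix, T is represented by a 4 × 3 matrix, so the rank and null space found above can be confirmed symbolically; a minimal sketch assuming SymPy:

```python
from sympy import Matrix

# Rows: the four entries of T(a + b*x + c*x^2) as linear functions of (a, b, c):
# f(1) - f(2) = -b - 3c, then 0, then 0, then f(0) = a.
M = Matrix([[0, -1, -3],
            [0,  0,  0],
            [0,  0,  0],
            [1,  0,  0]])

print(M.rank())        # 2 -> dim(R(T)) = 2
print(M.nullspace())   # one basis vector, (0, -3, 1) -> the polynomial -3x + x^2
```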

In Theorem 2.3, we see that a similar result is true in general.

As in Chapter 1, we measure the “size” of a subspace by its dimension. The null space and range are so important that we attach special names to their respective dimensions.

Definitions.

Let V and W be vector spaces, and let T: V → W be linear. If N(T) and R(T) are finite-dimensional, then we define the nullity of T, denoted nullity(T), and the rank of T, denoted rank(T), to be the dimensions of N(T) and R(T), respectively.

Reflecting on the action of a linear transformation, we see intuitively that the larger the nullity, the smaller the rank. In other words, the more vectors that are carried into 0, the smaller the range. The same heuristic reasoning tells us that the larger the rank, the smaller the nullity. This balance between rank and nullity is made precise in the next theorem, appropriately called the dimension theorem.

Theorem 2.3. (Dimension Theorem).

Let V and W be vector spaces, and let T: V → W be linear. If V is finite-dimensional, then

nullity(T)+rank(T)=dim(V).

Proof.

Suppose that dim(V) = n, dim(N(T)) = k, and {v₁, v₂, …, vₖ} is a basis for N(T). By the corollary to Theorem 1.11 (p. 51), we may extend {v₁, v₂, …, vₖ} to a basis β = {v₁, v₂, …, vₙ} for V. We claim that S = {T(vₖ₊₁), T(vₖ₊₂), …, T(vₙ)} is a basis for R(T).

First we prove that S generates R(T). Using Theorem 2.2 and the fact that T(vᵢ) = 0 for 1 ≤ i ≤ k, we have

R(T) = span({T(v₁), T(v₂), …, T(vₙ)}) = span({T(vₖ₊₁), T(vₖ₊₂), …, T(vₙ)}) = span(S).

Now we prove that S is linearly independent. Suppose that

∑ᵢ₌ₖ₊₁ⁿ bᵢT(vᵢ) = 0     for bₖ₊₁, bₖ₊₂, …, bₙ ∈ F.

Using the fact that T is linear, we have

T(∑ᵢ₌ₖ₊₁ⁿ bᵢvᵢ) = 0.

So

∑ᵢ₌ₖ₊₁ⁿ bᵢvᵢ ∈ N(T).

Hence there exist c₁, c₂, …, cₖ ∈ F such that

∑ᵢ₌ₖ₊₁ⁿ bᵢvᵢ = ∑ᵢ₌₁ᵏ cᵢvᵢ    or     ∑ᵢ₌₁ᵏ (−cᵢ)vᵢ + ∑ᵢ₌ₖ₊₁ⁿ bᵢvᵢ = 0.

Since β is a basis for V, we have bᵢ = 0 for all i. Hence S is linearly independent. Notice that this argument also shows that T(vₖ₊₁), T(vₖ₊₂), …, T(vₙ) are distinct; therefore rank(T) = n − k.

If we apply the dimension theorem to the linear transformation T in Example 9, we have that nullity(T) + 2 = 3, so nullity(T) = 1.
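The theorem can also be verified numerically for any matrix transformation, since rank and nullity are computable; a minimal sketch assuming SymPy, with an arbitrarily chosen 3 × 5 integer matrix standing in for T:

```python
from sympy import Matrix

# A defines T: F^5 -> F^3 by T(x) = Ax, so dim(V) = 5 (the number of columns).
A = Matrix([[1, 2, 0, -1, 3],
            [0, 1, 1,  2, 0],
            [1, 3, 1,  1, 3]])

nullity, rank = len(A.nullspace()), A.rank()
assert nullity + rank == A.cols   # nullity(T) + rank(T) = dim(V)
print(nullity, rank)              # 3 and 2 for this particular A
```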

The reader should review the concepts of “one-to-one” and “onto” presented in Appendix B. Interestingly, for a linear transformation, both of these concepts are intimately connected to the rank and nullity of the transformation. This is demonstrated in the next two theorems.

Theorem 2.4.

Let V and W be vector spaces, and let T: V → W be linear. Then T is one-to-one if and only if N(T) = {0}.

Proof.

Suppose that T is one-to-one and x ∈ N(T). Then T(x) = 0 = T(0). Since T is one-to-one, we have x = 0. Hence N(T) = {0}.

Now assume that N(T) = {0}, and suppose that T(x) = T(y). Then 0 = T(x) − T(y) = T(x − y) by property 3 on page 65. Therefore x − y ∈ N(T) = {0}. So x − y = 0, or x = y. This means that T is one-to-one.

The reader should observe that Theorem 2.4 allows us to conclude that the transformation defined in Example 9 is not one-to-one.

Surprisingly, the conditions of one-to-one and onto are equivalent in an important special case.

Theorem 2.5.

Let V and W be finite-dimensional vector spaces of equal dimension, and let T: V → W be linear. Then the following are equivalent.

  1. (a) T is one-to-one.

  2. (b) T is onto.

  3. (c) rank(T)=dim(V).

Proof.

From the dimension theorem, we have

nullity(T)+rank(T)=dim(V).

Now, with the use of Theorem 2.4, we have that T is one-to-one if and only if N(T) = {0}, if and only if nullity(T) = 0, if and only if rank(T) = dim(V), and, since dim(V) = dim(W), if and only if rank(T) = dim(W). By Theorem 1.11 (p. 50), this equality is equivalent to R(T) = W, the definition of T being onto.

We note that if V is not finite-dimensional and T: V → V is linear, then it does not follow that one-to-one and onto are equivalent. (See Exercises 15, 16, and 21.)

The linearity of T in Theorems 2.4 and 2.5 is essential, for it is easy to construct examples of functions from R into R that are not one-to-one, but are onto, and vice versa.

The next two examples make use of the preceding theorems in determining whether a given linear transformation is one-to-one or onto.

Example 11

Let T: P₂(R) → P₃(R) be the linear transformation defined by

T(f(x)) = 2f′(x) + ∫₀ˣ 3f(t) dt.

Now

R(T) = span({T(1), T(x), T(x²)}) = span({3x, 2 + (3/2)x², 4x + x³}).

Since {3x, 2 + (3/2)x², 4x + x³} is linearly independent, rank(T) = 3. Since dim(P₃(R)) = 4, T is not onto. From the dimension theorem, nullity(T) + 3 = 3. So nullity(T) = 0, and therefore, N(T) = {0}. We conclude from Theorem 2.4 that T is one-to-one.
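Relative to the bases {1, x, x²} and {1, x, x², x³}, the images computed above are the columns of a 4 × 3 matrix, whose rank confirms the count; a sketch assuming SymPy (Rational keeps the coefficient 3/2 exact):

```python
from sympy import Matrix, Rational

# Columns: coordinates of T(1) = 3x, T(x) = 2 + (3/2)x^2, T(x^2) = 4x + x^3
# in the basis {1, x, x^2, x^3}.
A = Matrix([[0, 2,              0],
            [3, 0,              4],
            [0, Rational(3, 2), 0],
            [0, 0,              1]])

print(A.rank())             # 3 -> rank(T) = 3, so T is not onto (dim P3(R) = 4)
print(len(A.nullspace()))   # 0 -> N(T) = {0}, so T is one-to-one
```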

Example 12

Let T: F² → F² be the linear transformation defined by

T(a₁, a₂) = (a₁ + a₂, a₁).

It is easy to see that N(T)={0}; so T is one-to-one. Hence Theorem 2.5 tells us that T must be onto.
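Over F = R, for instance, this T is multiplication by the matrix with rows (1, 1) and (1, 0); its determinant is nonzero, which reflects the fact that T is both one-to-one and onto. A small check assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])   # T(a1, a2) = (a1 + a2, a1)

# Nonzero determinant: Ax = 0 only for x = 0 (one-to-one), and Ax = b is solvable for all b (onto).
print(np.linalg.det(A))      # -1.0
```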

In Exercise 14, it is stated that if T is linear and one-to-one, then a subset S is linearly independent if and only if T(S) is linearly independent. Example 13 illustrates the use of this result.

Example 13

Let T: P₂(R) → R³ be the linear transformation defined by

T(a₀ + a₁x + a₂x²) = (a₀, a₁, a₂).

Clearly T is linear and one-to-one. Let S = {2 − x + 3x², x + x², 1 − 2x²}. Then S is linearly independent in P₂(R) because

T(S) = {(2, −1, 3), (0, 1, 1), (1, 0, −2)}

is linearly independent in R³.
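Linear independence of the three image vectors can be confirmed by checking that the matrix having them as rows has rank 3 (equivalently, a nonzero determinant); a minimal sketch assuming NumPy:

```python
import numpy as np

vectors = np.array([[2, -1,  3],
                    [0,  1,  1],
                    [1,  0, -2]], dtype=float)

print(np.linalg.matrix_rank(vectors))   # 3 -> T(S) is linearly independent
print(np.linalg.det(vectors))           # nonzero (here about -8)
```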

In Example 13, we transferred a property from the vector space of polynomials to a property in the vector space of 3-tuples. This technique is exploited more fully later.

One of the most important properties of a linear transformation is that it is completely determined by its action on a basis. This result, which follows from the next theorem and corollary, is used frequently throughout the book.

Theorem 2.6.

Let V and W be vector spaces over F, and suppose that {v₁, v₂, …, vₙ} is a basis for V. For w₁, w₂, …, wₙ in W, there exists exactly one linear transformation T: V → W such that T(vᵢ) = wᵢ for i = 1, 2, …, n.

Proof.

Let x ∈ V. Then

x = ∑ᵢ₌₁ⁿ aᵢvᵢ,

where a₁, a₂, …, aₙ are unique scalars. Define

T: V → W      by     T(x) = ∑ᵢ₌₁ⁿ aᵢwᵢ.

(a) T is linear: Suppose that u, v ∈ V and d ∈ F. Then we may write

u = ∑ᵢ₌₁ⁿ bᵢvᵢ    and     v = ∑ᵢ₌₁ⁿ cᵢvᵢ

for some scalars b₁, b₂, …, bₙ, c₁, c₂, …, cₙ. Thus

du + v = ∑ᵢ₌₁ⁿ (dbᵢ + cᵢ)vᵢ.

So

T(du + v) = ∑ᵢ₌₁ⁿ (dbᵢ + cᵢ)wᵢ = d ∑ᵢ₌₁ⁿ bᵢwᵢ + ∑ᵢ₌₁ⁿ cᵢwᵢ = dT(u) + T(v).

(b) Clearly

T(vᵢ) = wᵢ   for  i = 1, 2, …, n.

(c) T is unique: Suppose that U: V → W is linear and U(vᵢ) = wᵢ for i = 1, 2, …, n. Then for x ∈ V with

x = ∑ᵢ₌₁ⁿ aᵢvᵢ,

we have

U(x) = ∑ᵢ₌₁ⁿ aᵢU(vᵢ) = ∑ᵢ₌₁ⁿ aᵢwᵢ = T(x).

Hence U=T.

Corollary.

Let V and W be vector spaces, and suppose that V has a finite basis {v₁, v₂, …, vₙ}. If U, T: V → W are linear and U(vᵢ) = T(vᵢ) for i = 1, 2, …, n, then U = T.

Example 14

Let T: R² → R² be the linear transformation defined by

T(a₁, a₂) = (2a₂ − a₁, 3a₁),

and suppose that U: R² → R² is linear. If we know that U(1, 2) = (3, 3) and U(1, 1) = (1, 3), then U = T. This follows from the corollary and from the fact that {(1, 2), (1, 1)} is a basis for R².
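Because {(1, 2), (1, 1)} is a basis for R², the two prescribed values determine U completely; solving for the matrix that sends these basis vectors to the given images recovers the matrix of T. A sketch assuming NumPy:

```python
import numpy as np

B = np.column_stack([(1, 2), (1, 1)])   # basis vectors as columns
Y = np.column_stack([(3, 3), (1, 3)])   # their prescribed images under U

# The matrix A of U must satisfy A @ B = Y, so A = Y @ B^{-1}.
A = Y @ np.linalg.inv(B)
print(A)   # [[-1.  2.] [ 3.  0.]], i.e. the matrix of T(a1, a2) = (2a2 - a1, 3a1)
```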

Exercises

  1. Label the following statements as true or false. In each part, V and W are finite-dimensional vector spaces (over F), and T is a function from V to W.

    1. (a) If T is linear, then T preserves sums and scalar products.

    2. (b) If T(x+y)=T(x)+T(y), then T is linear.

    3. (c) T is one-to-one if and only if the only vector x such that T(x)=0 is x=0.

    4. (d) If T is linear, then T(0_V) = 0_W.

    5. (e) If T is linear, then nullity(T)+rank(T)=dim(W).

    6. (f) If T is linear, then T carries linearly independent subsets of V onto linearly independent subsets of W.

    7. (g) If T, U:VW are both linear and agree on a basis for V, then T=U.

    8. (h) Given x1,x2V and y1,y2W, there exists a linear transformation T:VW such that T(x1)=y1 and T(x2)=y2.

For Exercises 2 through 6, prove that T is a linear transformation, and find bases for both N(T) and R(T). Then compute the nullity and rank of T, and verify the dimension theorem. Finally, use the appropriate theorems in this section to determine whether T is one-to-one or onto.

  2. T: R³ → R² defined by T(a₁, a₂, a₃) = (a₁ − a₂, 2a₃).

  3. T: R² → R³ defined by T(a₁, a₂) = (a₁ + a₂, 0, 2a₁ − a₂).

  4. T: M_{2×3}(F) → M_{2×2}(F) defined by

    T( [ a₁₁  a₁₂  a₁₃ ] ) = [ 2a₁₁ − a₁₂   a₁₃ + 2a₁₂ ]
       [ a₂₁  a₂₂  a₂₃ ]     [      0            0     ].
  5. T: P₂(R) → P₃(R) defined by T(f(x)) = x f(x) + f′(x).

  6. T: M_{n×n}(F) → F defined by T(A) = tr(A). Recall (Example 4, Section 1.3) that

    tr(A) = ∑ᵢ₌₁ⁿ Aᵢᵢ.
  7. Prove properties 1, 2, 3, and 4 on page 65.

  8. Prove that the transformations in Examples 2 and 3 are linear.

  9. In this exercise, T: R² → R² is a function. For each of the following parts, state why T is not linear.

    1. (a) T(a₁, a₂) = (1, a₂)

    2. (b) T(a₁, a₂) = (a₁, a₁²)

    3. (c) T(a₁, a₂) = (sin a₁, 0)

    4. (d) T(a₁, a₂) = (|a₁|, a₂)

    5. (e) T(a₁, a₂) = (a₁ + 1, a₂)

  10. Suppose that T: R² → R² is linear, T(1, 0) = (1, 4), and T(1, 1) = (2, 5). What is T(2, 3)? Is T one-to-one?

  11. Prove that there exists a linear transformation T: R² → R³ such that T(1, 1) = (1, 0, 2) and T(2, 3) = (1, 1, 4). What is T(8, 11)?

  12. Is there a linear transformation T: R³ → R² such that T(1, 0, 3) = (1, 1) and T(−2, 0, −6) = (2, 1)?

  13. Let V and W be vector spaces, let T: V → W be linear, and let {w₁, w₂, …, wₖ} be a linearly independent set of k vectors from R(T). Prove that if S = {v₁, v₂, …, vₖ} is chosen so that T(vᵢ) = wᵢ for i = 1, 2, …, k, then S is linearly independent. Visit goo.gl/kmaQS2 for a solution.

  14. Let V and W be vector spaces and T: V → W be linear.

    1. (a) Prove that T is one-to-one if and only if T carries linearly independent subsets of V onto linearly independent subsets of W.

    2. (b) Suppose that T is one-to-one and that S is a subset of V. Prove that S is linearly independent if and only if T(S) is linearly independent.

    3. (c) Suppose β = {v₁, v₂, …, vₙ} is a basis for V and T is one-to-one and onto. Prove that T(β) = {T(v₁), T(v₂), …, T(vₙ)} is a basis for W.

  15. Recall the definition of P(R) on page 11. Define

    T: P(R) → P(R)    by     T(f(x)) = ∫₀ˣ f(t) dt.

    Prove that T is linear and one-to-one, but not onto.

  16. Let T: P(R) → P(R) be defined by T(f(x)) = f′(x). Recall that T is linear. Prove that T is onto, but not one-to-one.

  17. Let V and W be finite-dimensional vector spaces and T: V → W be linear.

    1. (a) Prove that if dim(V)<dim(W), then T cannot be onto.

    2. (b) Prove that if dim(V)>dim(W), then T cannot be one-to-one.

  18. Give an example of a linear transformation T: R² → R² such that N(T) = R(T).

  19. Give an example of vector spaces V and W and distinct linear transformations T and U from V to W such that N(T) = N(U) and R(T) = R(U).

  20. Let V and W be vector spaces with subspaces V₁ and W₁, respectively. If T: V → W is linear, prove that T(V₁) is a subspace of W and that {x ∈ V : T(x) ∈ W₁} is a subspace of V.

  21. Let V be the vector space of sequences described in Example 5 of Section 1.2. Define the functions T, U: V → V by

    T(a₁, a₂, …) = (a₂, a₃, …)    and    U(a₁, a₂, …) = (0, a₁, a₂, …).

    T and U are called the left shift and right shift operators on V, respectively.

    1. (a) Prove that T and U are linear.

    2. (b) Prove that T is onto, but not one-to-one.

    3. (c) Prove that U is one-to-one, but not onto.

  22. Let T: R³ → R be linear. Show that there exist scalars a, b, and c such that T(x, y, z) = ax + by + cz for all (x, y, z) ∈ R³. Can you generalize this result for T: Fⁿ → F? State and prove an analogous result for T: Fⁿ → Fᵐ.

  23. Let T: R³ → R be linear. Describe geometrically the possibilities for the null space of T. Hint: Use Exercise 22.

  24. Let T: V → W be linear, b ∈ W, and K = {x ∈ V : T(x) = b} be nonempty. Prove that if s ∈ K, then K = {s} + N(T). (See page 22 for the definition of the sum of subsets.)

The following definition is used in Exercises 25–28 and in Exercise 31.

Definitions.

Let V be a vector space and W₁ and W₂ be subspaces of V such that V = W₁ ⊕ W₂. (Recall the definition of direct sum given on page 22.) The function T: V → V defined by T(x) = x₁, where x = x₁ + x₂ with x₁ ∈ W₁ and x₂ ∈ W₂, is called the projection of V on W₁, or the projection on W₁ along W₂.

  25. Let T: R² → R². Include figures for each of the following parts.

    1. (a) Find a formula for T(a, b), where T represents the projection on the y-axis along the x-axis.

    2. (b) Find a formula for T(a, b), where T represents the projection on the y-axis along the line L = {(s, s) : s ∈ R}.

  26. Let T: R³ → R³.

    1. (a) If T(a, b, c) = (a, b, 0), show that T is the projection on the xy-plane along the z-axis.

    2. (b) Find a formula for T(a, b, c), where T represents the projection on the z-axis along the xy-plane.

    3. (c) If T(a, b, c) = (a − c, b, 0), show that T is the projection on the xy-plane along the line L = {(a, 0, a) : a ∈ R}.

  27. Using the notation in the definition above, assume that T: V → V is the projection on W₁ along W₂.

    1. (a) Prove that T is linear and W₁ = {x ∈ V : T(x) = x}.

    2. (b) Prove that W₁ = R(T) and W₂ = N(T).

    3. (c) Describe T if W₁ = V.

    4. (d) Describe T if W₁ is the zero subspace.

  28. Suppose that W is a subspace of a finite-dimensional vector space V.

    1. (a) Prove that there exists a subspace W′ and a function T: V → V such that T is a projection on W along W′.

    2. (b) Give an example of a subspace W of a vector space V such that there are two projections on W along two (distinct) subspaces.

The following definitions are used in Exercises 29–33.

Definitions.

Let V be a vector space, and let T: V → V be linear. A subspace W of V is said to be T-invariant if T(x) ∈ W for every x ∈ W, that is, T(W) ⊆ W. If W is T-invariant, we define the restriction of T on W to be the function T_W: W → W defined by T_W(x) = T(x) for all x ∈ W.

Exercises 29–33 assume that W is a subspace of a vector space V and that T: V → V is linear. Warning: Do not assume that W is T-invariant or that T is a projection unless explicitly stated.

  29. Prove that the subspaces {0}, V, R(T), and N(T) are all T-invariant.

  30. If W is T-invariant, prove that T_W is linear.

  31. Suppose that T is the projection on W along some subspace W′. Prove that W is T-invariant and that T_W = I_W.

  32. Suppose that V = R(T) ⊕ W and W is T-invariant. See page 22 for the definition of direct sum.

    1. (a) Prove that W ⊆ N(T).

    2. (b) Show that if V is finite-dimensional, then W=N(T).

    3. (c) Show by example that the conclusion of (b) is not necessarily true if V is not finite-dimensional.

  33. Suppose that W is T-invariant. Prove that N(T_W) = N(T) ∩ W and R(T_W) = T(W).

  34. Prove Theorem 2.2 for the case that β is infinite, that is, R(T) = span({T(v) : v ∈ β}).

  35. Prove the following generalization of Theorem 2.6: Let V and W be vector spaces over a common field, and let β be a basis for V. Then for any function f: β → W there exists exactly one linear transformation T: V → W such that T(x) = f(x) for all x ∈ β.

Exercises 36 and 37 require the definition of direct sum given on page 22.

  36. Let V be a finite-dimensional vector space and T: V → V be linear.

    1. (a) Suppose that V = R(T) + N(T). Prove that V = R(T) ⊕ N(T).

    2. (b) Suppose that R(T) ∩ N(T) = {0}. Prove that V = R(T) ⊕ N(T).

      Be careful to say in each part where finite-dimensionality is used.

  37. Let V and T be as defined in Exercise 21.

    1. (a) Prove that V=R(T)+N(T), but V is not a direct sum of these two spaces. Thus the result of Exercise 36(a) above cannot be proved without assuming that V is finite-dimensional.

    2. (b) Find a linear operator T₁ on V such that R(T₁) ∩ N(T₁) = {0} but V is not a direct sum of R(T₁) and N(T₁). Conclude that V being finite-dimensional is also essential in Exercise 36(b).

  38. A function T: V → W between vector spaces V and W is called additive if T(x + y) = T(x) + T(y) for all x, y ∈ V. Prove that if V and W are vector spaces over the field of rational numbers, then any additive function from V into W is a linear transformation.

  39. Let T: C → C be the function defined by T(z) = z̄, the complex conjugate of z. Prove that T is additive (as defined in Exercise 38) but not linear.

  40. Prove that there is an additive function T: R → R (as defined in Exercise 38) that is not linear. Hint: Let V be the set of real numbers regarded as a vector space over the field of rational numbers. By the corollary to Theorem 1.13 (p. 61), V has a basis β. Let x and y be two distinct vectors in β, and define f: β → V by f(x) = y, f(y) = x, and f(z) = z otherwise. By Exercise 35, there exists a linear transformation T: V → V such that T(u) = f(u) for all u ∈ β. Then T is additive, but for c = y/x, T(cx) ≠ cT(x).

  41. Prove that Theorem 2.6 and its corollary are true when V is infinite-dimensional.

The following exercise requires familiarity with the definition of quotient space given in Exercise 31 of Section 1.3.

  42. Let V be a vector space and W be a subspace of V. Define the mapping η: V → V/W by η(v) = v + W for v ∈ V.

    1. (a) Prove that η is a linear transformation from V onto V/W and that N(η)=W.

    2. (b) Suppose that V is finite-dimensional. Use (a) and the dimension theorem to derive a formula relating dim(V), dim(W), and dim(V/W).

    3. (c) Read the proof of the dimension theorem. Compare the method of solving (b) with the method of deriving the same result as outlined in Exercise 35 of Section 1.6.
