2.4 Invertibility and Isomorphisms

The concept of invertibility is introduced quite early in the study of functions. Fortunately, many of the intrinsic properties of functions are shared by their inverses. For example, in calculus we learn that the properties of being continuous or differentiable are generally retained by the inverse functions. We see in this section (Theorem 2.17) that the inverse of a linear transformation is also linear. This result greatly aids us in the study of inverses of matrices. As one might expect from Section 2.3, the inverse of the left-multiplication transformation $L_A$ (when it exists) can be used to determine properties of the inverse of the matrix $A$.

In the remainder of this section, we apply many of the results about invertibility to the concept of isomorphism. We will see that finite-dimensional vector spaces (over $F$) of equal dimension may be identified. These ideas will be made precise shortly.

The facts about inverse functions presented in Appendix B are, of course, true for linear transformations. Nevertheless, we repeat some of the definitions for use in this section.

Definition.

Let $V$ and $W$ be vector spaces, and let $T: V \to W$ be linear. A function $U: W \to V$ is said to be an inverse of $T$ if $TU = I_W$ and $UT = I_V$. If $T$ has an inverse, then $T$ is said to be invertible. As noted in Appendix B, if $T$ is invertible, then the inverse of $T$ is unique and is denoted by $T^{-1}$.

The following facts hold for invertible functions T and U.

  1. $(TU)^{-1} = U^{-1}T^{-1}$.

  2. $(T^{-1})^{-1} = T$; in particular, $T^{-1}$ is invertible.

    We often use the fact that a function is invertible if and only if it is both one-to-one and onto. We can therefore restate Theorem 2.5 as follows.

  3. Let $T: V \to W$ be a linear transformation, where $V$ and $W$ are finite-dimensional spaces of equal dimension. Then $T$ is invertible if and only if $\mathrm{rank}(T) = \dim(V)$.

Example 1

Let $T: P_1(R) \to R^2$ be the linear transformation defined by $T(a + bx) = (a,\ a + b)$. The reader can verify directly that $T^{-1}: R^2 \to P_1(R)$ is defined by $T^{-1}(c,\ d) = c + (d - c)x$. Observe that $T^{-1}$ is also linear. As Theorem 2.17 demonstrates, this is true in general.
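The mutual-inverse relationship in Example 1 can be spot-checked numerically. In this sketch a polynomial $a + bx$ is stored as the coordinate pair `(a, b)`; the function names `T` and `T_inv` are our own, not the text's.

```python
# A polynomial a + bx is represented by the pair (a, b).

def T(p):
    a, b = p
    return (a, a + b)          # T(a + bx) = (a, a + b)

def T_inv(v):
    c, d = v
    return (c, d - c)          # c + (d - c)x, stored as (c, d - c)

# UT is the identity on P_1(R) and TU is the identity on R^2,
# spot-checked on a few sample vectors:
for p in [(1, 0), (0, 1), (2, -3), (5, 7)]:
    assert T_inv(T(p)) == p
    assert T(T_inv(p)) == p
print("T and T_inv are mutual inverses on the sample points")
```

Of course a finite set of test points is not a proof; the point is only that the formulas compose to the identity as claimed.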

Theorem 2.17.

Let $V$ and $W$ be vector spaces, and let $T: V \to W$ be linear and invertible. Then $T^{-1}: W \to V$ is linear.

Proof.

Let $y_1, y_2 \in W$ and $c \in F$. Since $T$ is onto and one-to-one, there exist unique vectors $x_1$ and $x_2$ such that $T(x_1) = y_1$ and $T(x_2) = y_2$. Thus $x_1 = T^{-1}(y_1)$ and $x_2 = T^{-1}(y_2)$; so

$$T^{-1}(cy_1 + y_2) = T^{-1}[cT(x_1) + T(x_2)] = T^{-1}[T(cx_1 + x_2)] = cx_1 + x_2 = cT^{-1}(y_1) + T^{-1}(y_2).$$

Corollary.

Let $T$ be an invertible linear transformation from $V$ to $W$. Then $V$ is finite-dimensional if and only if $W$ is finite-dimensional. In this case, $\dim(V) = \dim(W)$.

Proof.

Suppose that $V$ is finite-dimensional. Let $\beta = \{x_1, x_2, \ldots, x_n\}$ be a basis for $V$. By Theorem 2.2 (p. 68), $T(\beta)$ spans $R(T) = W$; hence $W$ is finite-dimensional by Theorem 1.9 (p. 45). Conversely, if $W$ is finite-dimensional, then so is $V$ by a similar argument, using $T^{-1}$.

Now suppose that V and W are finite-dimensional. Because T is one-to-one and onto, we have

$$\mathrm{nullity}(T) = 0 \quad\text{and}\quad \mathrm{rank}(T) = \dim(R(T)) = \dim(W).$$

So by the dimension theorem (p. 70), it follows that $\dim(V) = \dim(W)$.

It now follows immediately from Theorem 2.5 (p. 71) that if T is a linear transformation between vector spaces of equal (finite) dimension, then the conditions of being invertible, one-to-one, and onto are all equivalent.

We are now ready to define the inverse of a matrix. The reader should note the analogy with the inverse of a linear transformation.

Definition.

Let $A$ be an $n \times n$ matrix. Then $A$ is invertible if there exists an $n \times n$ matrix $B$ such that $AB = BA = I$.

If $A$ is invertible, then the matrix $B$ such that $AB = BA = I$ is unique. (If $C$ were another such matrix, then $C = CI = C(AB) = (CA)B = IB = B$.) The matrix $B$ is called the inverse of $A$ and is denoted by $A^{-1}$.

Example 2

The reader should verify that the inverse of

$$\begin{pmatrix} 5 & 7 \\ 2 & 3 \end{pmatrix} \quad\text{is}\quad \begin{pmatrix} 3 & -7 \\ -2 & 5 \end{pmatrix}.$$
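That verification takes only a few lines with a small hand-rolled $2 \times 2$ product (the helper `matmul2` is ours; the entries of $B$ below assume the inverse of $\begin{pmatrix}5&7\\2&3\end{pmatrix}$ is $\begin{pmatrix}3&-7\\-2&5\end{pmatrix}$, as stated in Example 2).

```python
# Multiply two 2x2 matrices represented as nested lists.
def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[5, 7], [2, 3]]
B = [[3, -7], [-2, 5]]     # the claimed inverse of A
I = [[1, 0], [0, 1]]

# Both products must equal the identity matrix.
assert matmul2(A, B) == I
assert matmul2(B, A) == I
print("AB = BA = I, so B is the inverse of A")
```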

In Section 3.2, we will learn a technique for computing the inverse of a matrix. At this point, we develop a number of results that relate the inverses of matrices to the inverses of linear transformations.

Theorem 2.18.

Let $V$ and $W$ be finite-dimensional vector spaces with ordered bases $\beta$ and $\gamma$, respectively. Let $T: V \to W$ be linear. Then $T$ is invertible if and only if $[T]_\beta^\gamma$ is invertible. Furthermore, $[T^{-1}]_\gamma^\beta = ([T]_\beta^\gamma)^{-1}$.

Proof.

Suppose that $T$ is invertible. By the Corollary to Theorem 2.17, we have $\dim(V) = \dim(W)$. Let $n = \dim(V)$. So $[T]_\beta^\gamma$ is an $n \times n$ matrix. Now $T^{-1}: W \to V$ satisfies $TT^{-1} = I_W$ and $T^{-1}T = I_V$. Thus

$$I_n = [I_V]_\beta = [T^{-1}T]_\beta = [T^{-1}]_\gamma^\beta \, [T]_\beta^\gamma.$$

Similarly, $[T]_\beta^\gamma \, [T^{-1}]_\gamma^\beta = I_n$. So $[T]_\beta^\gamma$ is invertible and $([T]_\beta^\gamma)^{-1} = [T^{-1}]_\gamma^\beta$.

Now suppose that $A = [T]_\beta^\gamma$ is invertible. Then there exists an $n \times n$ matrix $B$ such that $AB = BA = I_n$. By Theorem 2.6 (p. 73), there exists $U \in \mathcal{L}(W,\ V)$ such that

$$U(w_j) = \sum_{i=1}^{n} B_{ij} v_i \quad\text{for}\quad j = 1, 2, \ldots, n,$$

where $\gamma = \{w_1, w_2, \ldots, w_n\}$ and $\beta = \{v_1, v_2, \ldots, v_n\}$. It follows that $[U]_\gamma^\beta = B$. To show that $U = T^{-1}$, observe that

$$[UT]_\beta = [U]_\gamma^\beta \, [T]_\beta^\gamma = BA = I_n = [I_V]_\beta$$

by Theorem 2.11 (p. 89). So $UT = I_V$, and similarly, $TU = I_W$.

Example 3

Let $\beta$ and $\gamma$ be the standard ordered bases of $P_1(R)$ and $R^2$, respectively. For $T$ as in Example 1, we have

$$[T]_\beta^\gamma = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \quad\text{and}\quad [T^{-1}]_\gamma^\beta = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}.$$

It can be verified by matrix multiplication that each matrix is the inverse of the other.
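That matrix multiplication can be carried out mechanically; the sketch below hard-codes the two matrices from Example 3 (the variable names, and the $2 \times 2$ product helper `matmul2`, are ours).

```python
# Multiply two 2x2 matrices represented as nested lists.
def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T_mat = [[1, 0], [1, 1]]        # matrix of T relative to beta, gamma
T_inv_mat = [[1, 0], [-1, 1]]   # matrix of T^{-1} relative to gamma, beta
I = [[1, 0], [0, 1]]

# Each product is the identity, so the matrices are mutual inverses.
assert matmul2(T_mat, T_inv_mat) == I
assert matmul2(T_inv_mat, T_mat) == I
print("each matrix is the inverse of the other")
```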

Corollary 1.

Let $V$ be a finite-dimensional vector space with an ordered basis $\beta$, and let $T: V \to V$ be linear. Then $T$ is invertible if and only if $[T]_\beta$ is invertible. Furthermore, $[T^{-1}]_\beta = ([T]_\beta)^{-1}$.

Proof.

Exercise.

Corollary 2.

Let $A$ be an $n \times n$ matrix. Then $A$ is invertible if and only if $L_A$ is invertible. Furthermore, $(L_A)^{-1} = L_{A^{-1}}$.

Proof.

Exercise.

The notion of invertibility may be used to formalize what may already have been observed by the reader, that is, that certain vector spaces strongly resemble one another except for the form of their vectors. For example, in the case of $M_{2\times 2}(F)$ and $F^4$, if we associate to each matrix

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

the 4-tuple (a, b, c, d), we see that sums and scalar products associate in a similar manner; that is, in terms of the vector space structure, these two vector spaces may be considered identical or isomorphic.
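This correspondence can be checked mechanically. In the sketch below, `flatten` is our name for the map sending a $2 \times 2$ matrix to its 4-tuple of entries, and the assertions confirm that it carries matrix sums and scalar multiples to tuple sums and scalar multiples.

```python
# The correspondence (a b; c d) <-> (a, b, c, d).
def flatten(M):
    (a, b), (c, d) = M
    return (a, b, c, d)

def mat_add(M, N):
    return [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]

def vec_add(u, v):
    return tuple(x + y for x, y in zip(u, v))

M = [[1, 2], [3, 4]]
N = [[5, -1], [0, 2]]

# Sums correspond to sums, and scalar multiples to scalar multiples.
assert flatten(mat_add(M, N)) == vec_add(flatten(M), flatten(N))
assert flatten([[3 * x for x in row] for row in M]) == tuple(3 * x for x in flatten(M))
print("the correspondence respects addition and scalar multiplication")
```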

Definitions.

Let $V$ and $W$ be vector spaces. We say that $V$ is isomorphic to $W$ if there exists a linear transformation $T: V \to W$ that is invertible. Such a linear transformation is called an isomorphism from $V$ onto $W$.

We leave as an exercise (see Exercise 13) the proof that “is isomorphic to” is an equivalence relation. (See Appendix A.) So we need only say that V and W are isomorphic.

Example 4

Define $T: F^2 \to P_1(F)$ by $T(a_1,\ a_2) = a_1 + a_2 x$. It is easily checked that $T$ is an isomorphism; so $F^2$ is isomorphic to $P_1(F)$.

Example 5

Define

$$T: P_3(R) \to M_{2\times 2}(R) \quad\text{by}\quad T(f) = \begin{pmatrix} f(1) & f(2) \\ f(3) & f(4) \end{pmatrix}.$$

It is easily verified that $T$ is linear. By use of the Lagrange interpolation formula in Section 1.6, it can be shown (compare with Exercise 22) that $T(f) = O$ only when $f$ is the zero polynomial. Thus $T$ is one-to-one (see Exercise 11). Moreover, because $\dim(P_3(R)) = \dim(M_{2\times 2}(R))$, it follows that $T$ is invertible by Theorem 2.5 (p. 71). We conclude that $P_3(R)$ is isomorphic to $M_{2\times 2}(R)$.
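In coordinates, the transformation of Example 5 is represented, relative to the basis $\{1, x, x^2, x^3\}$, by the $4 \times 4$ matrix of powers $c^k$ for $c = 1, 2, 3, 4$, so invertibility of $T$ reduces to that matrix having nonzero determinant. The sketch below checks this with a tiny cofactor-expansion helper (`det` is our own, not from the text).

```python
# Determinant by cofactor expansion along the first row (fine for 4x4).
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Row c gives the values of 1, x, x^2, x^3 at x = c, i.e. T on the basis.
V = [[c ** k for k in range(4)] for c in (1, 2, 3, 4)]

assert det(V) != 0   # so T(f) = O forces f = 0, i.e. T is one-to-one
print("the evaluation matrix is invertible, so T is an isomorphism")
```

The determinant here is the classical Vandermonde product of the differences of the nodes 1, 2, 3, 4, which is visibly nonzero because the nodes are distinct.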

In each of Examples 4 and 5, the reader may have observed that isomorphic vector spaces have equal dimensions. As the next theorem shows, this is no coincidence.

Theorem 2.19.

Let $V$ and $W$ be finite-dimensional vector spaces (over the same field). Then $V$ is isomorphic to $W$ if and only if $\dim(V) = \dim(W)$.

Proof.

Suppose that $V$ is isomorphic to $W$ and that $T: V \to W$ is an isomorphism from $V$ to $W$. By the corollary to Theorem 2.17, we have that $\dim(V) = \dim(W)$.

Now suppose that $\dim(V) = \dim(W)$, and let $\beta = \{v_1, v_2, \ldots, v_n\}$ and $\gamma = \{w_1, w_2, \ldots, w_n\}$ be bases for $V$ and $W$, respectively. By Theorem 2.6 (p. 73), there exists $T: V \to W$ such that $T$ is linear and $T(v_i) = w_i$ for $i = 1, 2, \ldots, n$. Using Theorem 2.2 (p. 68), we have

$$R(T) = \mathrm{span}(T(\beta)) = \mathrm{span}(\gamma) = W.$$

So T is onto. From Theorem 2.5 (p. 71), we have that T is also one-to-one. Hence T is an isomorphism.

By the corollary to Theorem 2.17, if $V$ and $W$ are isomorphic, then either both of $V$ and $W$ are finite-dimensional or both are infinite-dimensional.

Corollary.

Let $V$ be a vector space over $F$. Then $V$ is isomorphic to $F^n$ if and only if $\dim(V) = n$.

Up to this point, we have associated linear transformations with their matrix representations. We are now in a position to prove that, as a vector space, the collection of all linear transformations between two given vector spaces may be identified with the appropriate vector space of $m \times n$ matrices.

Theorem 2.20.

Let $V$ and $W$ be finite-dimensional vector spaces over $F$ of dimensions $n$ and $m$, respectively, and let $\beta$ and $\gamma$ be ordered bases for $V$ and $W$, respectively. Then the function $\Phi: \mathcal{L}(V,\ W) \to M_{m\times n}(F)$, defined by $\Phi(T) = [T]_\beta^\gamma$ for $T \in \mathcal{L}(V,\ W)$, is an isomorphism.

Proof.

By Theorem 2.8 (p. 83), $\Phi$ is linear. Hence we must show that $\Phi$ is one-to-one and onto. This is accomplished if we show that for every $m \times n$ matrix $A$, there exists a unique linear transformation $T: V \to W$ such that $\Phi(T) = A$. Let $\beta = \{v_1, v_2, \ldots, v_n\}$, $\gamma = \{w_1, w_2, \ldots, w_m\}$, and let $A$ be a given $m \times n$ matrix. By Theorem 2.6 (p. 73), there exists a unique linear transformation $T: V \to W$ such that

$$T(v_j) = \sum_{i=1}^{m} A_{ij} w_i \quad\text{for}\quad 1 \le j \le n.$$

But this means that $[T]_\beta^\gamma = A$, or $\Phi(T) = A$. So $\Phi$ is an isomorphism.

Corollary.

Let $V$ and $W$ be finite-dimensional vector spaces of dimensions $n$ and $m$, respectively. Then $\mathcal{L}(V,\ W)$ is finite-dimensional of dimension $mn$.

Proof.

The proof follows from Theorems 2.20 and 2.19 and the fact that $\dim(M_{m\times n}(F)) = mn$.

We conclude this section with a result that allows us to see more clearly the relationship between linear transformations defined on abstract finite-dimensional vector spaces and linear transformations from $F^n$ to $F^m$.

We begin by naming the transformation $x \mapsto [x]_\beta$ introduced in Section 2.2.

Definition.

Let $\beta$ be an ordered basis for an $n$-dimensional vector space $V$ over the field $F$. The standard representation of $V$ with respect to $\beta$ is the function $\phi_\beta: V \to F^n$ defined by $\phi_\beta(x) = [x]_\beta$ for each $x \in V$.

Example 6

Let $\beta = \{(1,\ 0),\ (0,\ 1)\}$ and $\gamma = \{(1,\ 2),\ (3,\ 4)\}$. It is easily observed that $\beta$ and $\gamma$ are ordered bases for $R^2$. For $x = (1,\ -2)$, we have

$$\phi_\beta(x) = [x]_\beta = \begin{pmatrix} 1 \\ -2 \end{pmatrix} \quad\text{and}\quad \phi_\gamma(x) = [x]_\gamma = \begin{pmatrix} -5 \\ 2 \end{pmatrix}.$$
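Finding $\gamma$-coordinates such as these amounts to solving a small linear system $a(1,2) + b(3,4) = x$. The sketch below does this with Cramer's rule; `solve2` is our own helper, and we take $x = (1, -2)$ and $\gamma = \{(1,2), (3,4)\}$ as in Example 6.

```python
# Solve a*u + b*v = x for (a, b) in R^2 by Cramer's rule.
def solve2(u, v, x):
    det = u[0] * v[1] - u[1] * v[0]          # nonzero since {u, v} is a basis
    a = (x[0] * v[1] - x[1] * v[0]) / det
    b = (u[0] * x[1] - u[1] * x[0]) / det
    return (a, b)

gamma = ((1, 2), (3, 4))
x = (1, -2)
assert solve2(*gamma, x) == (-5.0, 2.0)      # [x]_gamma = (-5, 2)
print("phi_gamma(x) =", solve2(*gamma, x))
```

With the standard basis the coordinates are just the entries of $x$ itself, which is why $\phi_\beta$ looks trivial in Example 6.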

We observed earlier that $\phi_\beta$ is a linear transformation. The next theorem tells us much more.

Theorem 2.21.

For any finite-dimensional vector space $V$ with ordered basis $\beta$, $\phi_\beta$ is an isomorphism.

Proof.

Exercise.

This theorem provides us with an alternate proof that an $n$-dimensional vector space is isomorphic to $F^n$ (see the corollary to Theorem 2.19).

Let $V$ and $W$ be vector spaces of dimension $n$ and $m$, respectively, and let $T: V \to W$ be a linear transformation. Define $A = [T]_\beta^\gamma$, where $\beta$ and $\gamma$ are arbitrary ordered bases of $V$ and $W$, respectively. We are now able to use $\phi_\beta$ and $\phi_\gamma$ to study the relationship between the linear transformations $T$ and $L_A: F^n \to F^m$.

Let us first consider Figure 2.2. Notice that there are two composites of linear transformations that map $V$ into $F^m$:

  1. Map $V$ into $F^n$ with $\phi_\beta$ and follow this transformation with $L_A$; this yields the composite $L_A \phi_\beta$.

  2. Map $V$ into $W$ with $T$ and follow it by $\phi_\gamma$ to obtain the composite $\phi_\gamma T$.

Figure 2.2: A diagram of the two composites of linear transformations that map $V$ into $F^m$.

These two composites are depicted by the dashed arrows in the diagram. By a simple reformulation of Theorem 2.14 (p. 92), we may conclude that

$$L_A \phi_\beta = \phi_\gamma T;$$

that is, the diagram “commutes.” Heuristically, this relationship indicates that after $V$ and $W$ are identified with $F^n$ and $F^m$ via $\phi_\beta$ and $\phi_\gamma$, respectively, we may “identify” $T$ with $L_A$. This diagram allows us to transfer operations on abstract vector spaces to ones on $F^n$ and $F^m$.

Example 7

Recall the linear transformation $T: P_3(R) \to P_2(R)$ defined in Example 4 of Section 2.2 ($T(f(x)) = f'(x)$). Let $\beta$ and $\gamma$ be the standard ordered bases for $P_3(R)$ and $P_2(R)$, respectively, and let $\phi_\beta: P_3(R) \to R^4$ and $\phi_\gamma: P_2(R) \to R^3$ be the corresponding standard representations of $P_3(R)$ and $P_2(R)$. If $A = [T]_\beta^\gamma$, then

$$A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.$$

Consider the polynomial $p(x) = 2 + x - 3x^2 + 5x^3$. We show that $L_A \phi_\beta(p(x)) = \phi_\gamma T(p(x))$. Now

$$L_A \phi_\beta(p(x)) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \\ -3 \\ 5 \end{pmatrix} = \begin{pmatrix} 1 \\ -6 \\ 15 \end{pmatrix}.$$

But since $T(p(x)) = p'(x) = 1 - 6x + 15x^2$, we have

$$\phi_\gamma T(p(x)) = \begin{pmatrix} 1 \\ -6 \\ 15 \end{pmatrix}.$$

So $L_A \phi_\beta(p(x)) = \phi_\gamma T(p(x))$.
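The computation in Example 7 can be replayed programmatically. In this sketch, polynomials are stored as coefficient lists, and `matvec` and `deriv` are our own helpers for the two legs of the commuting diagram.

```python
# Matrix of differentiation P_3(R) -> P_2(R) in the standard bases.
A = [[0, 1, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 3]]

# Multiply a matrix by a vector (this is L_A applied to phi_beta(p)).
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Differentiate a polynomial given as [a0, a1, a2, a3]; the result
# is phi_gamma(T(p)), the coefficient list of p'.
def deriv(coeffs):
    return [k * coeffs[k] for k in range(1, len(coeffs))]

p = [2, 1, -3, 5]                 # p(x) = 2 + x - 3x^2 + 5x^3
assert matvec(A, p) == deriv(p) == [1, -6, 15]
print("L_A . phi_beta and phi_gamma . T agree on p")
```

Replacing `p` with another coefficient list repeats Example 7 for a different polynomial, which is exactly the experiment the text suggests.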

Try repeating Example 7 with different polynomials p(x).

Exercises

  1. Label the following statements as true or false. In each part, $V$ and $W$ are vector spaces with ordered (finite) bases $\alpha$ and $\beta$, respectively, $T: V \to W$ is linear, and $A$ and $B$ are matrices.

    1. (a) $([T]_\alpha^\beta)^{-1} = [T^{-1}]_\alpha^\beta$

    2. (b) T is invertible if and only if T is one-to-one and onto.

    3. (c) $T = L_A$, where $A = [T]_\alpha^\beta$.

    4. (d) $M_{2\times 3}(F)$ is isomorphic to $F^5$.

    5. (e) $P_n(F)$ is isomorphic to $P_m(F)$ if and only if $n = m$.

    6. (f) $AB = I$ implies that $A$ and $B$ are invertible.

    7. (g) If $A$ is invertible, then $(A^{-1})^{-1} = A$.

    8. (h) $A$ is invertible if and only if $L_A$ is invertible.

    9. (i) A must be square in order to possess an inverse.

  2. For each of the following linear transformations T, determine whether T is invertible and justify your answer.

    1. (a) $T: R^2 \to R^3$ defined by $T(a_1,\ a_2) = (a_1 - 2a_2,\ a_2,\ 3a_1 + 4a_2)$.

    2. (b) $T: R^2 \to R^3$ defined by $T(a_1,\ a_2) = (3a_1 - a_2,\ a_2,\ 4a_1)$.

    3. (c) $T: R^3 \to R^3$ defined by $T(a_1,\ a_2,\ a_3) = (3a_1 - 2a_3,\ a_2,\ 3a_1 + 4a_2)$.

    4. (d) $T: P_3(R) \to P_2(R)$ defined by $T(p(x)) = p'(x)$.

    5. (e) $T: M_{2\times 2}(R) \to P_2(R)$ defined by $T\begin{pmatrix} a & b \\ c & d \end{pmatrix} = a + 2bx + (c + d)x^2$.

    6. (f) $T: M_{2\times 2}(R) \to M_{2\times 2}(R)$ defined by $T\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} a + b & a \\ c & c + d \end{pmatrix}$.

  3. Which of the following pairs of vector spaces are isomorphic? Justify your answers.

    1. (a) $F^3$ and $P_3(F)$.

    2. (b) $F^4$ and $P_3(F)$.

    3. (c) $M_{2\times 2}(R)$ and $P_3(R)$.

    4. (d) $V = \{A \in M_{2\times 2}(R) : \mathrm{tr}(A) = 0\}$ and $R^4$.

  4. Let $A$ and $B$ be $n \times n$ invertible matrices. Prove that $AB$ is invertible and $(AB)^{-1} = B^{-1}A^{-1}$.

  5. Let $A$ be invertible. Prove that $A^t$ is invertible and $(A^t)^{-1} = (A^{-1})^t$. Visit goo.gl/suFm6V for a solution.

  6. Prove that if $A$ is invertible and $AB = O$, then $B = O$.

  7. Let $A$ be an $n \times n$ matrix.

    1. (a) Suppose that $A^2 = O$. Prove that $A$ is not invertible.

    2. (b) Suppose that $AB = O$ for some nonzero $n \times n$ matrix $B$. Could $A$ be invertible? Explain.

  8. Let $A$ and $B$ be $n \times n$ matrices such that $AB$ is invertible.

    1. (a) Prove that A and B are invertible. Hint: See Exercise 12 of Section 2.3.

    2. (b) Give an example to show that a product of nonsquare matrices can be invertible even though the factors, by definition, are not.

  9. Let $A$ and $B$ be $n \times n$ matrices such that $AB = I_n$.

    1. (a) Use the preceding exercise to conclude that $A$ and $B$ are invertible.

    2. (b) Prove $A = B^{-1}$ (and hence $B = A^{-1}$). (We are, in effect, saying that for square matrices, a “one-sided” inverse is a “two-sided” inverse.)

    3. (c) State and prove analogous results for linear transformations defined on finite-dimensional vector spaces.

  10. Verify that the transformation in Example 5 is one-to-one.

  11. Let $\cong$ mean “is isomorphic to.” Prove that $\cong$ is an equivalence relation on the class of vector spaces over $F$.

  12. Let

    $$V = \left\{ \begin{pmatrix} a & a + b \\ 0 & c \end{pmatrix} : a,\ b,\ c \in F \right\}.$$

    Construct an isomorphism from $V$ to $F^3$.

  13. Let $V$ and $W$ be $n$-dimensional vector spaces, and let $T: V \to W$ be a linear transformation. Suppose that $\beta$ is a basis for $V$. Prove that $T$ is an isomorphism if and only if $T(\beta)$ is a basis for $W$.

  14. Let $B$ be an $n \times n$ invertible matrix. Define $\Phi: M_{n\times n}(F) \to M_{n\times n}(F)$ by $\Phi(A) = B^{-1}AB$. Prove that $\Phi$ is an isomorphism.

  15. Let $V$ and $W$ be finite-dimensional vector spaces and $T: V \to W$ be an isomorphism. Let $V_0$ be a subspace of $V$.

    1. (a) Prove that $T(V_0)$ is a subspace of $W$.

    2. (b) Prove that $\dim(V_0) = \dim(T(V_0))$.

  16. Repeat Example 7 with the polynomial $p(x) = 1 + x + 2x^2 + x^3$.

  17. In Example 5 of Section 2.1, the mapping $T: M_{2\times 2}(R) \to M_{2\times 2}(R)$ defined by $T(M) = M^t$ for each $M \in M_{2\times 2}(R)$ is a linear transformation. Let $\beta = \{E^{11}, E^{12}, E^{21}, E^{22}\}$, which is a basis for $M_{2\times 2}(R)$, as noted in Example 3 of Section 1.6.

    1. (a) Compute $[T]_\beta$.

    2. (b) Verify that $L_A \phi_\beta(M) = \phi_\beta T(M)$ for $A = [T]_\beta$ and

      $$M = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}.$$
  18. Let $T: V \to W$ be a linear transformation from an $n$-dimensional vector space $V$ to an $m$-dimensional vector space $W$. Let $\beta$ and $\gamma$ be ordered bases for $V$ and $W$, respectively. Prove that $\mathrm{rank}(T) = \mathrm{rank}(L_A)$ and that $\mathrm{nullity}(T) = \mathrm{nullity}(L_A)$, where $A = [T]_\beta^\gamma$. Hint: Apply Exercise 17 to Figure 2.2.

  19. Let $V$ and $W$ be finite-dimensional vector spaces with ordered bases $\beta = \{v_1, v_2, \ldots, v_n\}$ and $\gamma = \{w_1, w_2, \ldots, w_m\}$, respectively. By Theorem 2.6 (p. 73), there exist linear transformations $T_{ij}: V \to W$ such that

    $$T_{ij}(v_k) = \begin{cases} w_i & \text{if } k = j \\ 0 & \text{if } k \ne j. \end{cases}$$

    First prove that $\{T_{ij} : 1 \le i \le m,\ 1 \le j \le n\}$ is a basis for $\mathcal{L}(V,\ W)$. Then let $M^{ij}$ be the $m \times n$ matrix with 1 in the $i$th row and $j$th column and 0 elsewhere, and prove that $[T_{ij}]_\beta^\gamma = M^{ij}$. Again by Theorem 2.6, there exists a linear transformation $\Phi: \mathcal{L}(V,\ W) \to M_{m\times n}(F)$ such that $\Phi(T_{ij}) = M^{ij}$. Prove that $\Phi$ is an isomorphism.

  20. Let $c_0, c_1, \ldots, c_n$ be distinct scalars from an infinite field $F$. Define $T: P_n(F) \to F^{n+1}$ by $T(f) = (f(c_0),\ f(c_1),\ \ldots,\ f(c_n))$. Prove that $T$ is an isomorphism. Hint: Use the Lagrange polynomials associated with $c_0, c_1, \ldots, c_n$.

  21. Let $W$ denote the vector space of all sequences in $F$ that have only a finite number of nonzero terms (defined in Exercise 18 of Section 1.6), and let $Z = P(F)$. Define

    $$T: W \to Z \quad\text{by}\quad T(\sigma) = \sum_{i=0}^{n} \sigma(i)x^i,$$

    where $n$ is the largest integer such that $\sigma(n) \ne 0$. Prove that $T$ is an isomorphism.

The following exercise requires familiarity with the concept of quotient space defined in Exercise 31 of Section 1.3 and with Exercise 42 of Section 2.1.

  1. Let $V$ and $Z$ be vector spaces and $T: V \to Z$ be a linear transformation that is onto. Define the mapping

    $$\bar{T}: V/N(T) \to Z \quad\text{by}\quad \bar{T}(v + N(T)) = T(v)$$

    for any coset $v + N(T)$ in $V/N(T)$.

    1. (a) Prove that $\bar{T}$ is well-defined; that is, prove that if $v + N(T) = v' + N(T)$, then $T(v) = T(v')$.

    2. (b) Prove that $\bar{T}$ is linear.

    3. (c) Prove that $\bar{T}$ is an isomorphism.

    4. (d) Prove that the diagram shown in Figure 2.3 commutes; that is, prove that $T = \bar{T}\eta$.

      Figure 2.3: A diagram of the mapping of $T$.

  2. Let $V$ be a nonzero vector space over a field $F$, and suppose that $S$ is a basis for $V$. (By the corollary to Theorem 1.13 (p. 61) in Section 1.7, every vector space has a basis.) Let $C(S,\ F)$ denote the vector space of all functions $f \in \mathcal{F}(S,\ F)$ such that $f(s) = 0$ for all but a finite number of vectors in $S$. (See Exercise 14 of Section 1.3.) Let $\Psi: C(S,\ F) \to V$ be defined by $\Psi(f) = 0$ if $f$ is the zero function, and

    $$\Psi(f) = \sum_{s \in S,\ f(s) \ne 0} f(s)s,$$

    otherwise. Prove that $\Psi$ is an isomorphism. Thus every nonzero vector space can be viewed as a space of functions.
