2.6* Dual Spaces

In this section, we are concerned exclusively with linear transformations from a vector space V into its field of scalars F, which is itself a vector space of dimension 1 over F. Such a linear transformation is called a linear functional on V. We generally use the letters f, g, h, ... to denote linear functionals. As we see in Example 1, the definite integral provides us with one of the most important examples of a linear functional in mathematics.

Example 1

Let V be the vector space of continuous real-valued functions on the interval $[0, 2\pi]$. Fix a function $g \in V$. The function $h: V \to \mathbb{R}$ defined by

$$h(x) = \frac{1}{2\pi} \int_0^{2\pi} x(t)\,g(t)\,dt$$

is a linear functional on V. In the cases that $g(t)$ equals $\sin nt$ or $\cos nt$, $h(x)$ is often called the $n$th Fourier coefficient of $x$.
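As a quick numerical illustration of Example 1 (our own sketch, not part of the text), the code below approximates $h(x)$ on a uniform grid; the function names and the choices $x(t) = \cos 3t$, $g(t) = \cos nt$ are assumptions made for the example.

```python
import numpy as np

def fourier_coefficient(x, g, num_points=100_000):
    """Approximate h(x) = (1/(2*pi)) * integral over [0, 2*pi] of x(t) g(t) dt (Example 1)."""
    t = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    # The average of x(t) g(t) over [0, 2*pi] equals the integral divided by 2*pi.
    return float(np.mean(x(t) * g(t)))

# With x(t) = cos 3t and g(t) = cos nt, h(x) is 1/2 for n = 3 and essentially 0 otherwise.
for n in range(1, 5):
    print(n, round(fourier_coefficient(lambda t: np.cos(3 * t), lambda t, n=n: np.cos(n * t)), 4))
```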

Example 2

Let $V = M_{n \times n}(F)$, and define $f: V \to F$ by $f(A) = \operatorname{tr}(A)$, the trace of $A$. By Exercise 6 of Section 1.3, we have that $f$ is a linear functional.

Example 3

Let V be a finite-dimensional vector space, and let $\beta = \{x_1, x_2, \ldots, x_n\}$ be an ordered basis for V. For each $i = 1, 2, \ldots, n$, define $f_i(x) = a_i$, where

$$[x]_\beta = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}$$

is the coordinate vector of x relative to $\beta$. Then $f_i$ is a linear functional on V called the ith coordinate function with respect to the basis $\beta$. Note that $f_i(x_j) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta. These linear functionals play an important role in the theory of dual spaces (see Theorem 2.24).
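In coordinates, evaluating the coordinate functions of Example 3 amounts to solving a linear system. A minimal sketch, using a hypothetical ordered basis of $\mathbb{R}^3$ that is not taken from the text:

```python
import numpy as np

# Hypothetical ordered basis for R^3 (not from the text), one basis vector per column.
B = np.column_stack([(1.0, 1.0, 0.0), (0.0, 1.0, 1.0), (1.0, 0.0, 1.0)])

def coordinates(x):
    """Return [x]_beta; its ith entry is f_i(x), the ith coordinate function of Example 3."""
    return np.linalg.solve(B, np.asarray(x, dtype=float))

print(coordinates((2.0, 3.0, 1.0)))   # [2. 1. 0.] since (2, 3, 1) = 2(1,1,0) + 1(0,1,1) + 0(1,0,1)
print(coordinates(B[:, 1]))           # [0. 1. 0.], illustrating f_i(x_j) = delta_ij
```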

Definition.

For a vector space V over F, we define the dual space of V to be the vector space $\mathcal{L}(V, F)$, denoted by $V^*$.

Thus $V^*$ is the vector space consisting of all linear functionals on V with the operations of addition and scalar multiplication as defined in Section 2.2. Note that if V is finite-dimensional, then by the corollary to Theorem 2.20 (p. 105)

$$\dim(V^*) = \dim(\mathcal{L}(V, F)) = \dim(V)\cdot\dim(F) = \dim(V).$$

Hence by Theorem 2.19 (p. 104), V and $V^*$ are isomorphic. We also define the double dual $V^{**}$ of V to be the dual of $V^*$. In Theorem 2.26, we show, in fact, that there is a natural identification of V and $V^{**}$ in the case that V is finite-dimensional.

Theorem 2.24.

Suppose that V is a finite-dimensional vector space with the ordered basis $\beta = \{x_1, x_2, \ldots, x_n\}$. Let $f_i$ $(1 \le i \le n)$ be the ith coordinate function with respect to $\beta$ as just defined, and let $\beta^* = \{f_1, f_2, \ldots, f_n\}$. Then $\beta^*$ is an ordered basis for $V^*$, and, for any $f \in V^*$, we have

$$f = \sum_{i=1}^{n} f(x_i) f_i.$$

Proof.

Let $f \in V^*$. Since $\dim(V^*) = n$, we need only show that

$$f = \sum_{i=1}^{n} f(x_i) f_i,$$

from which it follows that $\beta^*$ generates $V^*$, and hence is a basis by Corollary 2(a) to the replacement theorem (p. 48). Let

$$g = \sum_{i=1}^{n} f(x_i) f_i.$$

For $1 \le j \le n$, we have

$$g(x_j) = \left( \sum_{i=1}^{n} f(x_i) f_i \right)(x_j) = \sum_{i=1}^{n} f(x_i) f_i(x_j) = \sum_{i=1}^{n} f(x_i)\,\delta_{ij} = f(x_j).$$

Therefore $f = g$ by the corollary to Theorem 2.6 (p. 73).
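A small numerical check of Theorem 2.24 (our own illustration, not part of the text): if a functional $f$ on $\mathbb{R}^2$ is represented by its coefficient row and the coordinate functions $f_i$ by the rows of $B^{-1}$, the identity $f = \sum_i f(x_i) f_i$ becomes an equality of rows. The basis below is the one used in Example 4; the functional $f(x, y) = 4x + 5y$ is hypothetical.

```python
import numpy as np

# Basis x_1 = (2, 1), x_2 = (3, 1) (the basis of Example 4 below), stored as columns of B,
# and a hypothetical functional f(x, y) = 4x + 5y stored as its coefficient row.
B = np.column_stack([(2.0, 1.0), (3.0, 1.0)])
f = np.array([4.0, 5.0])

dual = np.linalg.inv(B)                               # row i represents the coordinate function f_i
values = np.array([f @ B[:, i] for i in range(2)])    # the scalars f(x_1), f(x_2)

# Theorem 2.24 asserts f = f(x_1) f_1 + f(x_2) f_2; in rows this reads:
print(values @ dual)                                  # [4. 5.], the row representing f
```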

Definition.

Using the notation of Theorem 2.24, we call the ordered basis $\beta^* = \{f_1, f_2, \ldots, f_n\}$ of $V^*$ that satisfies $f_i(x_j) = \delta_{ij}$ $(1 \le i, j \le n)$ the dual basis of $\beta$.

Example 4

Let $\beta = \{(2, 1), (3, 1)\}$ be an ordered basis for $\mathbb{R}^2$. Suppose that the dual basis of $\beta$ is given by $\beta^* = \{f_1, f_2\}$. To explicitly determine a formula for $f_1$, we need to consider the equations

$$1 = f_1(2, 1) = f_1(2e_1 + e_2) = 2f_1(e_1) + f_1(e_2)$$
$$0 = f_1(3, 1) = f_1(3e_1 + e_2) = 3f_1(e_1) + f_1(e_2).$$

Solving these equations, we obtain $f_1(e_1) = -1$ and $f_1(e_2) = 3$; that is, $f_1(x, y) = -x + 3y$. Similarly, it can be shown that $f_2(x, y) = x - 2y$.
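The computation in Example 4 is equivalent to inverting the matrix whose columns are the basis vectors: row $i$ of the inverse lists the coefficients of $f_i$, since $B^{-1}B = I$ is exactly the condition $f_i(x_j) = \delta_{ij}$. A sketch (the variable names are ours):

```python
import numpy as np

# Basis of Example 4, one vector per column.
B = np.column_stack([(2.0, 1.0), (3.0, 1.0)])

# Row i of B^{-1} lists the coefficients of f_i, because B^{-1} B = I says f_i(x_j) = delta_ij.
dual = np.linalg.inv(B)
print(dual[0])   # [-1.  3.]  ->  f1(x, y) = -x + 3y
print(dual[1])   # [ 1. -2.]  ->  f2(x, y) =  x - 2y
```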

Now assume that V and W are n- and m-dimensional vector spaces over F with ordered bases $\beta$ and $\gamma$, respectively. In Section 2.4, we proved that there is a one-to-one correspondence between linear transformations $T: V \to W$ and $m \times n$ matrices (over F) via the correspondence $T \leftrightarrow [T]_\beta^\gamma$. For a matrix of the form $A = [T]_\beta^\gamma$, the question arises as to whether or not there exists a linear transformation U associated with T in some natural way such that U may be represented in some basis as $A^t$. Of course, if $m \neq n$, it would be impossible for U to be a linear transformation from V into W. We now answer this question by applying what we have already learned about dual spaces.

Theorem 2.25.

Let V and W be finite-dimensional vector spaces over F with ordered bases $\beta$ and $\gamma$, respectively. For any linear transformation $T: V \to W$, the mapping $T^t: W^* \to V^*$ defined by $T^t(g) = gT$ for all $g \in W^*$ is a linear transformation with the property that $[T^t]_{\gamma^*}^{\beta^*} = ([T]_\beta^\gamma)^t$.

Proof.

For $g \in W^*$, it is clear that $T^t(g) = gT$ is a linear functional on V and hence is in $V^*$. Thus $T^t$ maps $W^*$ into $V^*$. We leave the proof that $T^t$ is linear to the reader.

To complete the proof, let $\beta = \{x_1, x_2, \ldots, x_n\}$ and $\gamma = \{y_1, y_2, \ldots, y_m\}$ with dual bases $\beta^* = \{f_1, f_2, \ldots, f_n\}$ and $\gamma^* = \{g_1, g_2, \ldots, g_m\}$, respectively. For convenience, let $A = [T]_\beta^\gamma$. To find the jth column of $[T^t]_{\gamma^*}^{\beta^*}$, we begin by expressing $T^t(g_j)$ as a linear combination of the vectors of $\beta^*$. By Theorem 2.24, we have

$$T^t(g_j) = g_j T = \sum_{s=1}^{n} (g_j T)(x_s) f_s.$$

So the row i, column j entry of $[T^t]_{\gamma^*}^{\beta^*}$ is

$$(g_j T)(x_i) = g_j(T(x_i)) = g_j\!\left( \sum_{k=1}^{m} A_{ki} y_k \right) = \sum_{k=1}^{m} A_{ki}\, g_j(y_k) = \sum_{k=1}^{m} A_{ki}\, \delta_{jk} = A_{ji}.$$

Hence $[T^t]_{\gamma^*}^{\beta^*} = A^t$.

The linear transformation $T^t$ defined in Theorem 2.25 is called the transpose of T. It is clear that $T^t$ is the unique linear transformation U such that $[U]_{\gamma^*}^{\beta^*} = ([T]_\beta^\gamma)^t$.

We illustrate Theorem 2.25 with the next example.

Example 5

Define $T: P_1(\mathbb{R}) \to \mathbb{R}^2$ by $T(p(x)) = (p(0), p(2))$. Let $\beta$ and $\gamma$ be the standard ordered bases for $P_1(\mathbb{R})$ and $\mathbb{R}^2$, respectively. Clearly,

$$[T]_\beta^\gamma = \begin{pmatrix} 1 & 0 \\ 1 & 2 \end{pmatrix}.$$

We compute $[T^t]_{\gamma^*}^{\beta^*}$ directly from the definition. Let $\beta^* = \{f_1, f_2\}$ and $\gamma^* = \{g_1, g_2\}$. Suppose that $[T^t]_{\gamma^*}^{\beta^*} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$. Then $T^t(g_1) = af_1 + cf_2$. So

$$(T^t(g_1))(1) = (af_1 + cf_2)(1) = af_1(1) + cf_2(1) = a(1) + c(0) = a.$$

But also

$$(T^t(g_1))(1) = g_1(T(1)) = g_1(1, 1) = 1.$$

So $a = 1$. Using similar computations, we obtain that $c = 0$, $b = 1$, and $d = 2$. Hence a direct computation yields

$$[T^t]_{\gamma^*}^{\beta^*} = \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix} = ([T]_\beta^\gamma)^t,$$

as predicted by Theorem 2.25.
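The comparison in Example 5 can also be carried out numerically. The sketch below is our own (with $p(x) = c_0 + c_1 x$ stored as the pair $(c_0, c_1)$): it builds $[T]_\beta^\gamma$ column by column and $[T^t]_{\gamma^*}^{\beta^*}$ entry by entry from the definition, then confirms that one matrix is the transpose of the other.

```python
import numpy as np

# p(x) = c0 + c1*x is stored as the pair (c0, c1); T(p) = (p(0), p(2)) as in Example 5.
def T(c0, c1):
    p = lambda t: c0 + c1 * t
    return np.array([p(0.0), p(2.0)])

basis = [(1.0, 0.0), (0.0, 1.0)]                      # the standard basis {1, x} of P_1(R)

# Columns of [T] are the images of the basis polynomials in the standard basis of R^2.
A = np.column_stack([T(*b) for b in basis])

# Entry (i, j) of [T^t] is (g_j T)(x_i) = g_j(T(x_i)), the jth component of T(x_i).
Tt = np.array([[T(*basis[i])[j] for j in range(2)] for i in range(2)])

print(A)                          # [[1. 0.] [1. 2.]]
print(Tt)                         # [[1. 1.] [0. 2.]]
print(np.array_equal(Tt, A.T))    # True
```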

We now concern ourselves with demonstrating that any finite-dimensional vector space V can be identified in a natural way with its double dual V**. There is, in fact, an isomorphism between V and V** that does not depend on any choice of bases for the two vector spaces.

For a vector $x \in V$, we define $\hat{x}: V^* \to F$ by $\hat{x}(f) = f(x)$ for every $f \in V^*$. It is easy to verify that $\hat{x}$ is a linear functional on $V^*$, so $\hat{x} \in V^{**}$. The correspondence $x \leftrightarrow \hat{x}$ allows us to define the desired isomorphism between V and $V^{**}$.

Lemma.

Let V be a finite-dimensional vector space, and let $x \in V$. If $\hat{x}(f) = 0$ for all $f \in V^*$, then $x = 0$.

Proof.

Let $x \neq 0$. We show that there exists $f \in V^*$ such that $\hat{x}(f) \neq 0$. Choose an ordered basis $\beta = \{x_1, x_2, \ldots, x_n\}$ for V such that $x_1 = x$. Let $\{f_1, f_2, \ldots, f_n\}$ be the dual basis of $\beta$. Then $f_1(x_1) = 1 \neq 0$. Let $f = f_1$.

Theorem 2.26.

Let V be a finite-dimensional vector space, and define $\psi: V \to V^{**}$ by $\psi(x) = \hat{x}$. Then $\psi$ is an isomorphism.

Proof.

  1. (a) $\psi$ is linear: Let $x, y \in V$ and $c \in F$. For $f \in V^*$, we have

    $$\psi(cx + y)(f) = f(cx + y) = cf(x) + f(y) = c\hat{x}(f) + \hat{y}(f) = (c\hat{x} + \hat{y})(f).$$

    Therefore

    $$\psi(cx + y) = c\hat{x} + \hat{y} = c\psi(x) + \psi(y).$$

  2. (b) $\psi$ is one-to-one: Suppose that $\psi(x)$ is the zero functional on $V^*$ for some $x \in V$. Then $\hat{x}(f) = 0$ for every $f \in V^*$. By the previous lemma, we conclude that $x = 0$.

  3. (c) $\psi$ is an isomorphism: This follows from (b) and the fact that $\dim(V) = \dim(V^{**})$.

Corollary.

Let V be a finite-dimensional vector space with dual space V*. Then every ordered basis for V* is the dual basis for some basis for V.

Proof.

Let $\{f_1, f_2, \ldots, f_n\}$ be an ordered basis for $V^*$. We may combine Theorems 2.24 and 2.26 to conclude that for this basis for $V^*$ there exists a dual basis $\{\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n\}$ in $V^{**}$, that is, $\delta_{ij} = \hat{x}_i(f_j) = f_j(x_i)$ for all i and j. Thus $\{f_1, f_2, \ldots, f_n\}$ is the dual basis of $\{x_1, x_2, \ldots, x_n\}$.
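In coordinates, the corollary is again a matrix inversion: if a basis of functionals on $\mathbb{R}^n$ is written as the rows of an invertible matrix $F$, then the columns of $F^{-1}$ form the basis of $\mathbb{R}^n$ to which it is dual. A sketch with a hypothetical pair of functionals on $\mathbb{R}^2$ (not taken from the text):

```python
import numpy as np

# Hypothetical functionals f1(x, y) = x + y and f2(x, y) = x - y, one coefficient row each.
F = np.array([[1.0,  1.0],
              [1.0, -1.0]])

# Columns of F^{-1} are vectors x_1, x_2 with f_i(x_j) = delta_ij, since F @ inv(F) = I.
X = np.linalg.inv(F)
print(X[:, 0])   # [0.5 0.5]   ->  x_1 = (1/2,  1/2)
print(X[:, 1])   # [ 0.5 -0.5] ->  x_2 = (1/2, -1/2)
```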

Although many of the ideas of this section (e.g., the existence of a dual space) can be extended to the case where V is not finite-dimensional, only a finite-dimensional vector space is isomorphic to its double dual via the map $x \mapsto \hat{x}$. In fact, for infinite-dimensional vector spaces, no two of V, $V^*$, and $V^{**}$ are isomorphic.

Exercises

  1. Label the following statements as true or false. Assume that all vector spaces are finite-dimensional.

    1. (a) Every linear transformation is a linear functional.

    2. (b) A linear functional defined on a field may be represented as a 1×1 matrix.

    3. (c) Every vector space is isomorphic to its dual space.

    4. (d) Every vector space is isomorphic to the dual of some vector space.

    5. (e) If T is an isomorphism from V onto V* and β is a finite ordered basis for V, then T(β)=β*.

    6. (f) If T is a linear transformation from V to W, then the domain of (Tt)t is V**.

    7. (g) If V is isomorphic to W, then V* is isomorphic to W*.

    8. (h) The derivative of a function may be considered as a linear functional on the vector space of differentiable functions.

  2. For the following functions f on a vector space V, determine which are linear functionals.

    1. (a) $V = P(\mathbb{R})$; $f(p(x)) = 2p'(0) + p''(1)$, where $'$ denotes differentiation

    2. (b) $V = \mathbb{R}^2$; $f(x, y) = (2x, 4y)$

    3. (c) $V = M_{2 \times 2}(F)$; $f(A) = \operatorname{tr}(A)$

    4. (d) $V = \mathbb{R}^3$; $f(x, y, z) = x^2 + y^2 + z^2$

    5. (e) $V = P(\mathbb{R})$; $f(p(x)) = \int_0^1 p(t)\,dt$

    6. (f) $V = M_{2 \times 2}(F)$; $f(A) = A_{11}$

  3. For each of the following vector spaces V and bases β, find explicit formulas for vectors of the dual basis β* for V*, as in Example 4.

    1. (a) $V = \mathbb{R}^3$; $\beta = \{(1, 0, 1), (1, 2, 1), (0, 0, 1)\}$

    2. (b) $V = P_2(\mathbb{R})$; $\beta = \{1, x, x^2\}$

  4. Let $V = \mathbb{R}^3$, and define $f_1, f_2, f_3 \in V^*$ as follows:

    $$f_1(x, y, z) = x - 2y, \qquad f_2(x, y, z) = x + y + z, \qquad f_3(x, y, z) = y - 3z.$$

    Prove that $\{f_1, f_2, f_3\}$ is a basis for $V^*$, and then find a basis for V for which it is the dual basis.

  5. Let $V = P_1(\mathbb{R})$, and, for $p(x) \in V$, define $f_1, f_2 \in V^*$ by

    $$f_1(p(x)) = \int_0^1 p(t)\,dt \qquad \text{and} \qquad f_2(p(x)) = \int_0^2 p(t)\,dt.$$

    Prove that $\{f_1, f_2\}$ is a basis for $V^*$, and find a basis for V for which it is the dual basis.

  6. Define $f \in (\mathbb{R}^2)^*$ by $f(x, y) = 2x + y$ and $T: \mathbb{R}^2 \to \mathbb{R}^2$ by $T(x, y) = (3x + 2y, x)$.

    1. (a) Compute $T^t(f)$.

    2. (b) Compute $[T^t]_{\beta^*}$, where $\beta$ is the standard ordered basis for $\mathbb{R}^2$ and $\beta^* = \{f_1, f_2\}$ is the dual basis, by finding scalars a, b, c, and d such that $T^t(f_1) = af_1 + cf_2$ and $T^t(f_2) = bf_1 + df_2$.

    3. (c) Compute $[T]_\beta$ and $([T]_\beta)^t$, and compare your results with (b).

  7. Let $V = P_1(\mathbb{R})$ and $W = \mathbb{R}^2$ with respective standard ordered bases $\beta$ and $\gamma$. Define $T: V \to W$ by

    $$T(p(x)) = (p(0) - 2p(1),\ p(0) + p'(0)),$$

    where $p'(x)$ is the derivative of $p(x)$.

    1. (a) For $f \in W^*$ defined by $f(a, b) = a - 2b$, compute $T^t(f)$.

    2. (b) Compute $[T^t]_{\gamma^*}^{\beta^*}$ without appealing to Theorem 2.25.

    3. (c) Compute $[T]_\beta^\gamma$ and its transpose, and compare your results with (b).

  8. Let $\{u, v\}$ be a linearly independent set in $\mathbb{R}^3$. Show that the plane $\{su + tv : s, t \in \mathbb{R}\}$ through the origin in $\mathbb{R}^3$ may be identified with the null space of a vector in $(\mathbb{R}^3)^*$.

  9. Prove that a function $T: F^n \to F^m$ is linear if and only if there exist $f_1, f_2, \ldots, f_m \in (F^n)^*$ such that $T(x) = (f_1(x), f_2(x), \ldots, f_m(x))$ for all $x \in F^n$. Hint: If T is linear, define $f_i(x) = (g_i T)(x)$ for $x \in F^n$; that is, $f_i = T^t(g_i)$ for $1 \le i \le m$, where $\{g_1, g_2, \ldots, g_m\}$ is the dual basis of the standard ordered basis for $F^m$.

  10. Let $V = P_n(F)$, and let $c_0, c_1, \ldots, c_n$ be distinct scalars in F.

    1. (a) For $0 \le i \le n$, define $f_i \in V^*$ by $f_i(p(x)) = p(c_i)$. Prove that $\{f_0, f_1, \ldots, f_n\}$ is a basis for $V^*$. Hint: Apply any linear combination of this set that equals the zero transformation to $p(x) = (x - c_1)(x - c_2)\cdots(x - c_n)$, and deduce that the first coefficient is zero.

    2. (b) Use the corollary to Theorem 2.26 and (a) to show that there exist unique polynomials $p_0(x), p_1(x), \ldots, p_n(x)$ such that $p_i(c_j) = \delta_{ij}$ for $0 \le i \le n$. These polynomials are the Lagrange polynomials defined in Section 1.6.

    3. (c) For any scalars $a_0, a_1, \ldots, a_n$ (not necessarily distinct), deduce that there exists a unique polynomial $q(x)$ of degree at most n such that $q(c_i) = a_i$ for $0 \le i \le n$. In fact,

      $$q(x) = \sum_{i=0}^{n} a_i p_i(x).$$
    4. (d) Deduce the Lagrange interpolation formula:

      $$p(x) = \sum_{i=0}^{n} p(c_i) p_i(x)$$

      for any $p(x) \in V$.

    5. (e) Prove that

      $$\int_a^b p(t)\,dt = \sum_{i=0}^{n} p(c_i)\, d_i,$$

      where

      $$d_i = \int_a^b p_i(t)\,dt.$$

      Suppose now that

      $$c_i = a + \frac{i(b - a)}{n} \quad \text{for } i = 0, 1, \ldots, n.$$

      For n=1, the preceding result yields the trapezoidal rule for evaluating the definite integral of a polynomial. For n=2, this result yields Simpson’s rule for evaluating the definite integral of a polynomial.

  11. Let V and W be finite-dimensional vector spaces over F, and let $\psi_1$ and $\psi_2$ be the isomorphisms between V and $V^{**}$ and between W and $W^{**}$, respectively, as defined in Theorem 2.26. Let $T: V \to W$ be linear, and define $T^{tt} = (T^t)^t$. Prove that the diagram depicted in Figure 2.6 commutes (i.e., prove that $\psi_2 T = T^{tt} \psi_1$). Visit goo.gl/Lkd6XZ for a solution.

    Figure 2.6: A square diagram relating the maps $T$, $T^{tt}$, $\psi_1$, and $\psi_2$.

  12. Let V be a finite-dimensional vector space with the ordered basis $\beta$. Prove that $\psi(\beta) = \beta^{**}$, where $\psi$ is defined in Theorem 2.26.

In Exercises 13 through 17, V denotes a finite-dimensional vector space over F.

  13. For every subset S of V, define the annihilator $S^0$ of S as

    $$S^0 = \{f \in V^* : f(x) = 0 \text{ for all } x \in S\}.$$

    1. (a) Prove that $S^0$ is a subspace of $V^*$.

    2. (b) If W is a subspace of V and $x \notin W$, prove that there exists $f \in W^0$ such that $f(x) \neq 0$.

    3. (c) Prove that $(S^0)^0 = \operatorname{span}(\psi(S))$, where $\psi$ is defined as in Theorem 2.26.

    4. (d) For subspaces $W_1$ and $W_2$, prove that $W_1 = W_2$ if and only if $W_1^0 = W_2^0$.

    5. (e) For subspaces $W_1$ and $W_2$, show that $(W_1 + W_2)^0 = W_1^0 \cap W_2^0$.

  14. Prove that if W is a subspace of V, then $\dim(W) + \dim(W^0) = \dim(V)$. Hint: Extend an ordered basis $\{x_1, x_2, \ldots, x_k\}$ of W to an ordered basis $\beta = \{x_1, x_2, \ldots, x_n\}$ of V. Let $\beta^* = \{f_1, f_2, \ldots, f_n\}$. Prove that $\{f_{k+1}, f_{k+2}, \ldots, f_n\}$ is a basis for $W^0$.

  15. Suppose that W is a finite-dimensional vector space and that $T: V \to W$ is linear. Prove that $N(T^t) = (R(T))^0$.

  16. Use Exercises 14 and 15 to deduce that $\operatorname{rank}(L_{A^t}) = \operatorname{rank}(L_A)$ for any $A \in M_{m \times n}(F)$.

In Exercises 17 through 20, assume that V and W are finite-dimensional vector spaces. (It can be shown, however, that these exercises are true for all vector spaces V and W.)

  17. Let T be a linear operator on V, and let W be a subspace of V. Prove that W is T-invariant (as defined in the exercises of Section 2.1) if and only if $W^0$ is $T^t$-invariant.

  18. Let V be a nonzero vector space over a field F, and let S be a basis for V. (By the corollary to Theorem 1.13 (p. 61) in Section 1.7, every vector space has a basis.) Let $\Phi: V^* \to \mathcal{F}(S, F)$ be the mapping defined by $\Phi(f) = f|_S$, the restriction of f to S. Prove that $\Phi$ is an isomorphism. Hint: Apply Exercise 35 of Section 2.1.

  19. Let V be a nonzero vector space, and let W be a proper subspace of V (i.e., $W \neq V$).

    1. (a) Let $g \in W^*$ and $v \in V$ with $v \notin W$. Prove that for any scalar a there exists a function $f \in V^*$ such that $f(v) = a$ and $f(x) = g(x)$ for all x in W. Hint: For the infinite-dimensional case, use Exercise 4 of Section 1.7 and Exercise 35 of Section 2.1.

    2. (b) Use (a) to prove that there exists a nonzero linear functional $f \in V^*$ such that $f(x) = 0$ for all $x \in W$.

  20. Let V and W be nonzero vector spaces over the same field, and let $T: V \to W$ be a linear transformation.

    1. (a) Prove that T is onto if and only if $T^t$ is one-to-one.

    2. (b) Prove that $T^t$ is onto if and only if T is one-to-one.

    Hint: In the infinite-dimensional case, use Exercise 19 for parts of the proof.
