6.2 The Gram-Schmidt Orthogonalization Process and Orthogonal Complements

In previous chapters, we have seen the special role of the standard ordered bases for $C^n$ and $R^n$. The special properties of these bases stem from the fact that the basis vectors form an orthonormal set. Just as bases are the building blocks of vector spaces, bases that are also orthonormal sets are the building blocks of inner product spaces. We now name such bases.

Definition.

Let V be an inner product space. A subset of V is an orthonormal basis for V if it is an ordered basis that is orthonormal.

Example 1

The standard ordered basis for $F^n$ is an orthonormal basis for $F^n$.

Example 2

The set

$$\left\{ \left( \tfrac{1}{\sqrt{5}},\, \tfrac{2}{\sqrt{5}} \right),\ \left( \tfrac{2}{\sqrt{5}},\, -\tfrac{1}{\sqrt{5}} \right) \right\}$$

is an orthonormal basis for $R^2$.

The next theorem and its corollaries illustrate why orthonormal sets and, in particular, orthonormal bases are so important.

Theorem 6.3.

Let V be an inner product space and $S = \{v_1, v_2, \ldots, v_k\}$ be an orthogonal subset of V consisting of nonzero vectors. If $y \in \operatorname{span}(S)$, then

$$y = \sum_{i=1}^{k} \frac{\langle y, v_i \rangle}{\|v_i\|^2}\, v_i.$$

Proof.

Write $y = \sum_{i=1}^{k} a_i v_i$, where $a_1, a_2, \ldots, a_k \in F$. Then, for $1 \le j \le k$, we have

$$\langle y, v_j \rangle = \left\langle \sum_{i=1}^{k} a_i v_i,\ v_j \right\rangle = \sum_{i=1}^{k} a_i \langle v_i, v_j \rangle = a_j \langle v_j, v_j \rangle = a_j \|v_j\|^2.$$

So $a_j = \dfrac{\langle y, v_j \rangle}{\|v_j\|^2}$, and the result follows.

The next corollary follows immediately from Theorem 6.3.

Corollary 1.

If, in addition to the hypotheses of Theorem 6.3, S is orthonormal and $y \in \operatorname{span}(S)$, then

$$y = \sum_{i=1}^{k} \langle y, v_i \rangle\, v_i.$$

If V possesses a finite orthonormal basis, then Corollary 1 allows us to compute the coefficients in a linear combination very easily. (See Example 3.)

Corollary 2.

Let V be an inner product space, and let S be an orthogonal subset of V consisting of nonzero vectors. Then S is linearly independent.

Proof.

Suppose that $v_1, v_2, \ldots, v_k \in S$ and

$$\sum_{i=1}^{k} a_i v_i = 0.$$

As in the proof of Theorem 6.3 with $y = 0$, we have $a_j = \langle 0, v_j \rangle / \|v_j\|^2 = 0$ for all $j$. So S is linearly independent.

Example 3

By Corollary 2, the orthonormal set

$$\left\{ \tfrac{1}{\sqrt{2}}(1, 1, 0),\ \tfrac{1}{\sqrt{3}}(1, -1, 1),\ \tfrac{1}{\sqrt{6}}(-1, 1, 2) \right\}$$

obtained in Example 8 of Section 6.1 is an orthonormal basis for $R^3$. Let $x = (2, 1, 3)$. The coefficients given by Corollary 1 to Theorem 6.3 that express $x$ as a linear combination of the basis vectors are

$$a_1 = \frac{1}{\sqrt{2}}(2 + 1) = \frac{3}{\sqrt{2}}, \qquad a_2 = \frac{1}{\sqrt{3}}(2 - 1 + 3) = \frac{4}{\sqrt{3}},$$

and

$$a_3 = \frac{1}{\sqrt{6}}(-2 + 1 + 6) = \frac{5}{\sqrt{6}}.$$

As a check, we have

$$(2, 1, 3) = \frac{3}{2}(1, 1, 0) + \frac{4}{3}(1, -1, 1) + \frac{5}{6}(-1, 1, 2).$$
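
The following NumPy sketch (ours, not the text's; the basis vectors and $x$ are those of the example) carries out this computation:

```python
import numpy as np

# The orthonormal basis of Example 3 and the vector x = (2, 1, 3).
u = [np.array([1, 1, 0]) / np.sqrt(2),
     np.array([1, -1, 1]) / np.sqrt(3),
     np.array([-1, 1, 2]) / np.sqrt(6)]
x = np.array([2.0, 1.0, 3.0])

# Corollary 1: the coefficient of u_i is the inner product <x, u_i>.
a = [x @ ui for ui in u]
print(a)  # approximately [3/sqrt(2), 4/sqrt(3), 5/sqrt(6)]

# Reassembling x from these coefficients recovers the original vector.
print(np.allclose(sum(ai * ui for ai, ui in zip(a, u)), x))  # True
```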

Corollary 2 tells us that the vector space H in Section 6.1 contains an infinite linearly independent set, and hence H is not a finite-dimensional vector space.

Of course, we have not yet shown that every finite-dimensional inner product space possesses an orthonormal basis. The next theorem takes us most of the way in obtaining this result. It tells us how to construct an orthogonal set from a linearly independent set of vectors in such a way that both sets generate the same subspace.

Before stating this theorem, let us consider a simple case. Suppose that $\{w_1, w_2\}$ is a linearly independent subset of an inner product space (and hence a basis for some two-dimensional subspace). We want to construct an orthogonal set from $\{w_1, w_2\}$ that spans the same subspace. Figure 6.1 suggests that the set $\{v_1, v_2\}$, where $v_1 = w_1$ and $v_2 = w_2 - cw_1$, has this property if $c$ is chosen so that $v_2$ is orthogonal to $w_1$.


Figure 6.1

To find c, we need only solve the following equation:

$$0 = \langle v_2, w_1 \rangle = \langle w_2 - cw_1, w_1 \rangle = \langle w_2, w_1 \rangle - c\langle w_1, w_1 \rangle.$$

So

$$c = \frac{\langle w_2, w_1 \rangle}{\|w_1\|^2}.$$

Thus

$$v_2 = w_2 - \frac{\langle w_2, w_1 \rangle}{\|w_1\|^2}\, w_1.$$

The next theorem shows us that this process can be extended to any finite linearly independent subset.

Theorem 6.4.

Let V be an inner product space and $S = \{w_1, w_2, \ldots, w_n\}$ be a linearly independent subset of V. Define $S' = \{v_1, v_2, \ldots, v_n\}$, where $v_1 = w_1$ and

$$v_k = w_k - \sum_{j=1}^{k-1} \frac{\langle w_k, v_j \rangle}{\|v_j\|^2}\, v_j \qquad \text{for } 2 \le k \le n. \tag{1}$$

Then $S'$ is an orthogonal set of nonzero vectors such that $\operatorname{span}(S') = \operatorname{span}(S)$.

Proof.

The proof is by mathematical induction on $n$, the number of vectors in S. For $k = 1, 2, \ldots, n$, let $S_k = \{w_1, w_2, \ldots, w_k\}$. If $n = 1$, then the theorem is proved by taking $S_1' = S_1$; i.e., $v_1 = w_1 \ne 0$. Assume then that the set $S_{k-1}' = \{v_1, v_2, \ldots, v_{k-1}\}$ with the desired properties has been constructed by the repeated use of (1). We show that the set $S_k' = \{v_1, v_2, \ldots, v_{k-1}, v_k\}$ also has the desired properties, where $v_k$ is obtained from $S_{k-1}'$ by (1). If $v_k = 0$, then (1) implies that $w_k \in \operatorname{span}(S_{k-1}') = \operatorname{span}(S_{k-1})$, which contradicts the assumption that $S_k$ is linearly independent. For $1 \le i \le k - 1$, it follows from (1) that

$$\langle v_k, v_i \rangle = \langle w_k, v_i \rangle - \sum_{j=1}^{k-1} \frac{\langle w_k, v_j \rangle}{\|v_j\|^2} \langle v_j, v_i \rangle = \langle w_k, v_i \rangle - \frac{\langle w_k, v_i \rangle}{\|v_i\|^2}\, \|v_i\|^2 = 0,$$

since $\langle v_j, v_i \rangle = 0$ if $i \ne j$ by the induction assumption that $S_{k-1}'$ is orthogonal. Hence $S_k'$ is an orthogonal set of nonzero vectors. Now, by (1), we have that $\operatorname{span}(S_k') \subseteq \operatorname{span}(S_k)$. But by Corollary 2 to Theorem 6.3, $S_k'$ is linearly independent; so $\dim(\operatorname{span}(S_k')) = \dim(\operatorname{span}(S_k)) = k$. Therefore $\operatorname{span}(S_k') = \operatorname{span}(S_k)$.

The construction of $\{v_1, v_2, \ldots, v_n\}$ by the use of Theorem 6.4 is called the Gram-Schmidt process.

Example 4

In $R^4$, let $w_1 = (1, 0, 1, 0)$, $w_2 = (1, 1, 1, 1)$, and $w_3 = (0, 1, 2, 1)$. Then $\{w_1, w_2, w_3\}$ is linearly independent. We use the Gram-Schmidt process to compute the orthogonal vectors $v_1$, $v_2$, and $v_3$, and then we normalize these vectors to obtain an orthonormal set.

Take $v_1 = w_1 = (1, 0, 1, 0)$. Then

$$v_2 = w_2 - \frac{\langle w_2, v_1 \rangle}{\|v_1\|^2}\, v_1 = (1, 1, 1, 1) - \frac{2}{2}(1, 0, 1, 0) = (0, 1, 0, 1).$$

Finally,

$$v_3 = w_3 - \frac{\langle w_3, v_1 \rangle}{\|v_1\|^2}\, v_1 - \frac{\langle w_3, v_2 \rangle}{\|v_2\|^2}\, v_2 = (0, 1, 2, 1) - \frac{2}{2}(1, 0, 1, 0) - \frac{2}{2}(0, 1, 0, 1) = (-1, 0, 1, 0).$$

These vectors can be normalized to obtain the orthonormal basis $\{u_1, u_2, u_3\}$, where

$$u_1 = \frac{1}{\|v_1\|}\, v_1 = \frac{1}{\sqrt{2}}(1, 0, 1, 0), \qquad u_2 = \frac{1}{\|v_2\|}\, v_2 = \frac{1}{\sqrt{2}}(0, 1, 0, 1),$$

and

$$u_3 = \frac{v_3}{\|v_3\|} = \frac{1}{\sqrt{2}}(-1, 0, 1, 0).$$
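
For readers who wish to experiment, here is a minimal Python implementation of the process (our own sketch; the function name gram_schmidt is ours, not the text's). It applies equation (1) literally and reproduces the vectors of this example:

```python
import numpy as np

def gram_schmidt(ws):
    """Apply equation (1): v_k = w_k - sum_j <w_k, v_j>/||v_j||^2 * v_j."""
    vs = []
    for w in ws:
        v = w.astype(float)
        for vj in vs:
            v = v - (w @ vj) / (vj @ vj) * vj
        vs.append(v)
    return vs

ws = [np.array([1, 0, 1, 0]),
      np.array([1, 1, 1, 1]),
      np.array([0, 1, 2, 1])]
v1, v2, v3 = gram_schmidt(ws)
print(v1, v2, v3)  # (1,0,1,0), (0,1,0,1), (-1,0,1,0)

# Normalizing yields the orthonormal basis {u1, u2, u3}.
u1, u2, u3 = (v / np.linalg.norm(v) for v in (v1, v2, v3))
```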

Example 5

Let $V = P(R)$ with the inner product $\langle f(x), g(x) \rangle = \int_{-1}^{1} f(t)g(t)\,dt$, and consider the subspace $P_2(R)$ with the standard ordered basis $\beta$. We use the Gram-Schmidt process to replace $\beta$ by an orthogonal basis $\{v_1, v_2, v_3\}$ for $P_2(R)$, and then use this orthogonal basis to obtain an orthonormal basis for $P_2(R)$.

Take $v_1 = 1$. Then $\|v_1\|^2 = \int_{-1}^{1} 1^2\,dt = 2$ and $\langle x, v_1 \rangle = \int_{-1}^{1} t \cdot 1\,dt = 0$. Thus

$$v_2 = x - \frac{\langle x, v_1 \rangle}{\|v_1\|^2}\, v_1 = x - \frac{0}{2} = x.$$

Furthermore,

$$\langle x^2, v_1 \rangle = \int_{-1}^{1} t^2 \cdot 1\,dt = \frac{2}{3} \qquad \text{and} \qquad \langle x^2, v_2 \rangle = \int_{-1}^{1} t^2 \cdot t\,dt = 0.$$

Therefore

$$v_3 = x^2 - \frac{\langle x^2, v_1 \rangle}{\|v_1\|^2}\, v_1 - \frac{\langle x^2, v_2 \rangle}{\|v_2\|^2}\, v_2 = x^2 - \frac{1}{3} \cdot 1 - 0 \cdot x = x^2 - \frac{1}{3}.$$

We conclude that $\{1,\ x,\ x^2 - \frac{1}{3}\}$ is an orthogonal basis for $P_2(R)$.

To obtain an orthonormal basis, we normalize $v_1$, $v_2$, and $v_3$ to obtain

$$u_1 = \frac{1}{\sqrt{\int_{-1}^{1} 1^2\,dt}} = \frac{1}{\sqrt{2}}, \qquad u_2 = \frac{x}{\sqrt{\int_{-1}^{1} t^2\,dt}} = \sqrt{\frac{3}{2}}\, x,$$

and similarly,

$$u_3 = \frac{v_3}{\|v_3\|} = \sqrt{\frac{5}{8}}\,(3x^2 - 1).$$

Thus $\{u_1, u_2, u_3\}$ is the desired orthonormal basis for $P_2(R)$.

Continuing to apply the Gram-Schmidt orthogonalization process to the basis $\{1, x, x^2, \ldots\}$ for $P(R)$, we obtain an orthogonal basis $\{v_1, v_2, v_3, \ldots\}$. For each $k$, the polynomial $(1/v_k(1))\,v_k$ is called the $k$th Legendre polynomial. The first three Legendre polynomials are $1$, $x$, and $\frac{1}{2}(3x^2 - 1)$. The set of Legendre polynomials is also an orthogonal basis for $P(R)$.
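
Since the inner product here is an integral, the computation is convenient to reproduce symbolically. The following SymPy sketch (ours; it assumes the inner product of Example 5) recovers the orthogonal basis above and the first few Legendre polynomials:

```python
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    # <f, g> = integral of f(t) g(t) over [-1, 1].
    return sp.integrate(f * g, (x, -1, 1))

# Gram-Schmidt applied to {1, x, x^2, x^3} via equation (1).
vs = []
for w in [1, x, x**2, x**3]:
    v = w - sum(inner(w, vj) / inner(vj, vj) * vj for vj in vs)
    vs.append(sp.expand(v))
print(vs)  # [1, x, x**2 - 1/3, x**3 - 3*x/5]

# Scaling each v_k so that it equals 1 at x = 1 gives the Legendre polynomials.
print([sp.expand(v / v.subs(x, 1)) for v in vs])
# [1, x, 3*x**2/2 - 1/2, 5*x**3/2 - 3*x/2]
```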

The following result gives us a simple method of representing a vector as a linear combination of the vectors in an orthonormal basis.

Theorem 6.5.

Let V be a nonzero finite-dimensional inner product space. Then V has an orthonormal basis $\beta$. Furthermore, if $\beta = \{v_1, v_2, \ldots, v_n\}$ and $x \in V$, then

$$x = \sum_{i=1}^{n} \langle x, v_i \rangle\, v_i.$$

Proof.

Let $\beta_0$ be an ordered basis for V. Apply Theorem 6.4 to obtain an orthogonal set $\beta'$ of nonzero vectors with $\operatorname{span}(\beta') = \operatorname{span}(\beta_0) = V$. By normalizing each vector in $\beta'$, we obtain an orthonormal set $\beta$ that generates V. By Corollary 2 to Theorem 6.3, $\beta$ is linearly independent; therefore $\beta$ is an orthonormal basis for V. The remainder of the theorem follows from Corollary 1 to Theorem 6.3.

Example 6

We use Theorem 6.5 to represent the polynomial $f(x) = 1 + 2x + 3x^2$ as a linear combination of the vectors in the orthonormal basis $\{u_1, u_2, u_3\}$ for $P_2(R)$ obtained in Example 5. Observe that

$$\langle f(x), u_1 \rangle = \int_{-1}^{1} \frac{1}{\sqrt{2}}\,(1 + 2t + 3t^2)\,dt = 2\sqrt{2}, \qquad \langle f(x), u_2 \rangle = \int_{-1}^{1} \sqrt{\frac{3}{2}}\, t\,(1 + 2t + 3t^2)\,dt = \frac{2\sqrt{6}}{3},$$

and

$$\langle f(x), u_3 \rangle = \int_{-1}^{1} \sqrt{\frac{5}{8}}\,(3t^2 - 1)(1 + 2t + 3t^2)\,dt = \frac{2\sqrt{10}}{5}.$$

Therefore $f(x) = 2\sqrt{2}\, u_1 + \frac{2\sqrt{6}}{3}\, u_2 + \frac{2\sqrt{10}}{5}\, u_3$.

Theorem 6.5 gives us a simple method for computing the entries of the matrix representation of a linear operator with respect to an orthonormal basis.

Corollary.

Let V be a finite-dimensional inner product space with an orthonormal basis $\beta = \{v_1, v_2, \ldots, v_n\}$. Let T be a linear operator on V, and let $A = [T]_\beta$. Then for any $i$ and $j$, $A_{ij} = \langle T(v_j), v_i \rangle$.

Proof. From Theorem 6.5, we have

$$T(v_j) = \sum_{i=1}^{n} \langle T(v_j), v_i \rangle\, v_i.$$

Hence $A_{ij} = \langle T(v_j), v_i \rangle$.
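
As a small illustration (the operator below is our own invented example, not one from the text), let T be the operator on $R^3$ whose matrix in the standard basis is $M$; the corollary then gives $[T]_\beta$ entry by entry for the orthonormal basis $\beta$ of Example 3:

```python
import numpy as np

# A made-up operator T on R^3, given in the standard basis by M.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
T = lambda v: M @ v

# The orthonormal basis of Example 3.
beta = [np.array([1, 1, 0]) / np.sqrt(2),
        np.array([1, -1, 1]) / np.sqrt(3),
        np.array([-1, 1, 2]) / np.sqrt(6)]

# Corollary: A_ij = <T(v_j), v_i>.
A = np.array([[T(vj) @ vi for vj in beta] for vi in beta])

# Consistency check: with Q = [v1 v2 v3] as columns, [T]_beta = Q^T M Q.
Q = np.column_stack(beta)
print(np.allclose(A, Q.T @ M @ Q))  # True
```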

The scalars $\langle x, v_i \rangle$ given in Theorem 6.5 have been studied extensively for special inner product spaces. Although the vectors $v_1, v_2, \ldots, v_n$ were chosen from an orthonormal basis, we introduce a terminology associated with orthonormal sets $\beta$ in more general inner product spaces.

Definition.

Let $\beta$ be an orthonormal subset (possibly infinite) of an inner product space V, and let $x \in V$. We define the Fourier coefficients of $x$ relative to $\beta$ to be the scalars $\langle x, y \rangle$, where $y \in \beta$.

In the first half of the 19th century, the French mathematician Jean Baptiste Fourier was associated with the study of the scalars

$$\int_{0}^{2\pi} f(t) \sin nt\,dt \qquad \text{and} \qquad \int_{0}^{2\pi} f(t) \cos nt\,dt,$$

or in the complex case,

$$c_n = \frac{1}{2\pi} \int_{0}^{2\pi} f(t)\, e^{-int}\,dt,$$

for a function $f$. In the context of Example 9 of Section 6.1, we see that $c_n = \langle f, f_n \rangle$, where $f_n(t) = e^{int}$; that is, $c_n$ is the $n$th Fourier coefficient for a continuous function $f \in V$ relative to S. The coefficients $c_n$ are the “classical” Fourier coefficients of a function, and the literature concerning their behavior is extensive. We learn more about Fourier coefficients in the remainder of this chapter.

Example 7

Let $S = \{e^{int} : n \text{ is an integer}\}$. In Example 9 of Section 6.1, S was shown to be an orthonormal set in H. We compute the Fourier coefficients of $f(t) = t$ relative to S. Using integration by parts, we have, for $n \ne 0$,

$$\langle f, f_n \rangle = \frac{1}{2\pi} \int_{0}^{2\pi} t\, \overline{e^{int}}\,dt = \frac{1}{2\pi} \int_{0}^{2\pi} t\, e^{-int}\,dt = -\frac{1}{in},$$

and, for $n = 0$,

$$\langle f, 1 \rangle = \frac{1}{2\pi} \int_{0}^{2\pi} t(1)\,dt = \pi.$$

As a result of these computations, and using Exercise 16 of this section, we obtain an upper bound for the sum of a special infinite series as follows:

$$\|f\|^2 \ge \sum_{n=-k}^{-1} |\langle f, f_n \rangle|^2 + |\langle f, 1 \rangle|^2 + \sum_{n=1}^{k} |\langle f, f_n \rangle|^2 = \sum_{n=-k}^{-1} \frac{1}{n^2} + \pi^2 + \sum_{n=1}^{k} \frac{1}{n^2} = 2\sum_{n=1}^{k} \frac{1}{n^2} + \pi^2$$

for every $k$. Now, using the fact that $\|f\|^2 = \frac{4}{3}\pi^2$, we obtain

$$\frac{4}{3}\pi^2 \ge 2\sum_{n=1}^{k} \frac{1}{n^2} + \pi^2,$$

or

$$\frac{\pi^2}{6} \ge \sum_{n=1}^{k} \frac{1}{n^2}.$$

Because this inequality holds for all $k$, we may let $k \to \infty$ to obtain

$$\frac{\pi^2}{6} \ge \sum_{n=1}^{\infty} \frac{1}{n^2}.$$

Additional results may be produced by replacing f by other functions.
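
A quick numerical illustration of this bound (ours, in plain Python): the partial sums of the series indeed remain below $\pi^2/6 \approx 1.644934$.

```python
import math

# Partial sum of 1/n^2 for n = 1, ..., 100000; it stays below pi^2/6.
partial = sum(1.0 / n**2 for n in range(1, 100001))
print(partial, math.pi**2 / 6)  # 1.6449240... and 1.6449340...
```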

We are now ready to proceed with the concept of an orthogonal complement.

Definition.

Let S be a nonempty subset of an inner product space V. We define $S^\perp$ (read “S perp”) to be the set of all vectors in V that are orthogonal to every vector in S; that is, $S^\perp = \{x \in V : \langle x, y \rangle = 0 \text{ for all } y \in S\}$. The set $S^\perp$ is called the orthogonal complement of S.

It is easily seen that $S^\perp$ is a subspace of V for any subset S of V.

Example 8

The reader should verify that $\{0\}^\perp = V$ and $V^\perp = \{0\}$ for any inner product space V.

Example 9

If $V = R^3$ and $S = \{e_3\}$, then $S^\perp$ equals the $xy$-plane (see Exercise 5).

Exercise 18 provides an interesting example of an orthogonal complement in an infinite-dimensional inner product space.

Consider the problem in $R^3$ of finding the distance from a point P to a plane W. (See Figure 6.2.) Problems of this type arise in many settings. If we let y be the vector determined by 0 and P, we may restate the problem as follows: Determine the vector u in W that is “closest” to y. The desired distance is clearly given by $\|y - u\|$. Notice from the figure that the vector $z = y - u$ is orthogonal to every vector in W, and so $z \in W^\perp$.


Figure 6.2

The next result presents a practical method of finding u in the case that W is a finite-dimensional subspace of an inner product space.

Theorem 6.6.

Let W be a finite-dimensional subspace of an inner product space V, and let $y \in V$. Then there exist unique vectors $u \in W$ and $z \in W^\perp$ such that $y = u + z$. Furthermore, if $\{v_1, v_2, \ldots, v_k\}$ is an orthonormal basis for W, then

$$u = \sum_{i=1}^{k} \langle y, v_i \rangle\, v_i.$$

Proof.

Let $\{v_1, v_2, \ldots, v_k\}$ be an orthonormal basis for W, let u be as defined in the preceding equation, and let $z = y - u$. Clearly $u \in W$ and $y = u + z$.

To show that $z \in W^\perp$, it suffices to show, by Exercise 7, that z is orthogonal to each $v_j$. For any $j$, we have

$$\langle z, v_j \rangle = \left\langle y - \sum_{i=1}^{k} \langle y, v_i \rangle v_i,\ v_j \right\rangle = \langle y, v_j \rangle - \sum_{i=1}^{k} \langle y, v_i \rangle \langle v_i, v_j \rangle = \langle y, v_j \rangle - \langle y, v_j \rangle = 0.$$

To show uniqueness of u and z, suppose that $y = u + z = u' + z'$, where $u' \in W$ and $z' \in W^\perp$. Then $u - u' = z' - z \in W \cap W^\perp = \{0\}$. Therefore $u = u'$ and $z = z'$.

Corollary.

In the notation of Theorem 6.6, the vector u is the unique vector in W that is “closest” to y; that is, for any $x \in W$, $\|y - x\| \ge \|y - u\|$, and this inequality is an equality if and only if $x = u$.

Proof.

As in Theorem 6.6, we have that $y = u + z$, where $z \in W^\perp$. Let $x \in W$. Then $u - x$ is orthogonal to z, so, by Exercise 10 of Section 6.1, we have

$$\|y - x\|^2 = \|u + z - x\|^2 = \|(u - x) + z\|^2 = \|u - x\|^2 + \|z\|^2 \ge \|z\|^2 = \|y - u\|^2.$$

Now suppose that $\|y - x\| = \|y - u\|$. Then the inequality above becomes an equality, and therefore $\|u - x\|^2 + \|z\|^2 = \|z\|^2$. It follows that $\|u - x\| = 0$, and hence $x = u$. The proof of the converse is obvious.

The vector u in the corollary is called the orthogonal projection of y on W. We will see the importance of orthogonal projections of vectors in the application to least squares in Section 6.3.

Example 10

Let $V = P_3(R)$ with the inner product

$$\langle f(x), g(x) \rangle = \int_{-1}^{1} f(t)g(t)\,dt \qquad \text{for all } f(x), g(x) \in V.$$

We compute the orthogonal projection $f_1(x)$ of $f(x) = x^3$ on $P_2(R)$.

By Example 5,

$$\{u_1, u_2, u_3\} = \left\{ \frac{1}{\sqrt{2}},\ \sqrt{\frac{3}{2}}\, x,\ \sqrt{\frac{5}{8}}\,(3x^2 - 1) \right\}$$

is an orthonormal basis for $P_2(R)$. For these vectors, we have

$$\langle f(x), u_1 \rangle = \int_{-1}^{1} t^3 \cdot \frac{1}{\sqrt{2}}\,dt = 0, \qquad \langle f(x), u_2 \rangle = \int_{-1}^{1} t^3 \cdot \sqrt{\frac{3}{2}}\, t\,dt = \frac{\sqrt{6}}{5},$$

and

$$\langle f(x), u_3 \rangle = \int_{-1}^{1} t^3 \cdot \sqrt{\frac{5}{8}}\,(3t^2 - 1)\,dt = 0.$$

Hence

$$f_1(x) = \langle f(x), u_1 \rangle\, u_1 + \langle f(x), u_2 \rangle\, u_2 + \langle f(x), u_3 \rangle\, u_3 = \frac{3}{5}\, x.$$
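
The same projection can be checked symbolically. A short SymPy sketch (ours), using the orthonormal basis of Example 5 and the formula of Theorem 6.6:

```python
import sympy as sp

x = sp.symbols('x')
inner = lambda f, g: sp.integrate(f * g, (x, -1, 1))

# Orthonormal basis of P_2(R) from Example 5.
u1 = 1 / sp.sqrt(2)
u2 = sp.sqrt(sp.Rational(3, 2)) * x
u3 = sp.sqrt(sp.Rational(5, 8)) * (3 * x**2 - 1)

# Theorem 6.6: u = sum of <y, v_i> v_i is the orthogonal projection.
f = x**3
proj = sum(inner(f, u) * u for u in (u1, u2, u3))
print(sp.simplify(proj))  # 3*x/5
```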

It was shown (Corollary 2 to the replacement theorem, p. 48) that any linearly independent set in a finite-dimensional vector space can be extended to a basis. The next theorem provides an interesting analog for an orthonormal subset of a finite-dimensional inner product space.

Theorem 6.7.

Suppose that $S = \{v_1, v_2, \ldots, v_k\}$ is an orthonormal set in an $n$-dimensional inner product space V. Then

  (a) S can be extended to an orthonormal basis $\{v_1, v_2, \ldots, v_k, v_{k+1}, \ldots, v_n\}$ for V.

  (b) If $W = \operatorname{span}(S)$, then $S_1 = \{v_{k+1}, v_{k+2}, \ldots, v_n\}$ is an orthonormal basis for $W^\perp$ (using the preceding notation).

  (c) If W is any subspace of V, then $\dim(V) = \dim(W) + \dim(W^\perp)$.

Proof.

(a) By Corollary 2 to the replacement theorem (p. 48), S can be extended to an ordered basis $S' = \{v_1, v_2, \ldots, v_k, w_{k+1}, \ldots, w_n\}$ for V. Now apply the Gram-Schmidt process to $S'$. The first $k$ vectors resulting from this process are the vectors in S by Exercise 8, and this new set spans V. Normalizing the last $n - k$ vectors of this set produces an orthonormal set that spans V. The result now follows.

(b) Because $S_1$ is a subset of a basis, it is linearly independent. Since $S_1$ is clearly a subset of $W^\perp$, we need only show that it spans $W^\perp$. Note that, for any $x \in V$, we have

$$x = \sum_{i=1}^{n} \langle x, v_i \rangle\, v_i.$$

If $x \in W^\perp$, then $\langle x, v_i \rangle = 0$ for $1 \le i \le k$. Therefore

$$x = \sum_{i=k+1}^{n} \langle x, v_i \rangle\, v_i \in \operatorname{span}(S_1).$$

(c) Let W be a subspace of V. It is a finite-dimensional inner product space because V is, and so it has an orthonormal basis $\{v_1, v_2, \ldots, v_k\}$. By (a) and (b), we have

$$\dim(V) = n = k + (n - k) = \dim(W) + \dim(W^\perp).$$

Example 11

Let $W = \operatorname{span}(\{e_1, e_2\})$ in $F^3$. Then $x = (a, b, c) \in W^\perp$ if and only if $0 = \langle x, e_1 \rangle = a$ and $0 = \langle x, e_2 \rangle = b$. So $x = (0, 0, c)$, and therefore $W^\perp = \operatorname{span}(\{e_3\})$. One can deduce the same result by noting that $e_3 \in W^\perp$ and, from (c), that $\dim(W^\perp) = 3 - 2 = 1$.
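
In the real case this computation amounts to a null-space calculation: $x \in W^\perp$ exactly when $Bx = 0$, where the rows of $B$ span W. A one-line check with SciPy (our own sketch):

```python
import numpy as np
from scipy.linalg import null_space

# Rows of B span W = span({e1, e2}) in R^3.
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(null_space(B))  # an orthonormal basis for W-perp: +/- e3 as a column
```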

Exercises

  1. Label the following statements as true or false.

    (a) The Gram-Schmidt orthogonalization process produces an orthonormal set from an arbitrary linearly independent set.

    (b) Every nonzero finite-dimensional inner product space has an orthonormal basis.

    (c) The orthogonal complement of any set is a subspace.

    (d) If $\{v_1, v_2, \ldots, v_n\}$ is a basis for an inner product space V, then for any $x \in V$ the scalars $\langle x, v_i \rangle$ are the Fourier coefficients of $x$.

    (e) An orthonormal basis must be an ordered basis.

    (f) Every orthogonal set is linearly independent.

    (g) Every orthonormal set is linearly independent.

  2. In each part, apply the Gram-Schmidt process to the given subset S of the inner product space V to obtain an orthogonal basis for span(S). Then normalize the vectors in this basis to obtain an orthonormal basis $\beta$ for span(S), and compute the Fourier coefficients of the given vector relative to $\beta$. Finally, use Theorem 6.5 to verify your result.

    (a) $V = R^3$, $S = \{(1, 0, 1), (0, 1, 1), (1, 3, 3)\}$, and $x = (1, 1, 2)$

    (b) $V = R^3$, $S = \{(1, 1, 1), (0, 1, 1), (0, 0, 1)\}$, and $x = (1, 0, 1)$

    (c) $V = P_2(R)$ with the inner product $\langle f(x), g(x) \rangle = \int_0^1 f(t)g(t)\,dt$, $S = \{1, x, x^2\}$, and $h(x) = 1 + x$

    (d) $V = \operatorname{span}(S)$, where $S = \{(1, i, 0), (1 - i, 2, 4i)\}$, and $x = (3 + i, 4i, 4)$

    (e) $V = R^4$, $S = \{(2, 1, 2, 4), (2, 1, 5, 5), (1, 3, 7, 11)\}$, and $x = (11, 8, 4, 18)$

    (f) $V = R^4$, $S = \{(1, 2, 1, 3), (3, 6, 3, 1), (1, 4, 2, 8)\}$, and $x = (1, 2, 1, 1)$

    (g) $V = M_{2\times2}(R)$, $S = \left\{ \begin{pmatrix} 3 & 5 \\ -1 & 1 \end{pmatrix},\ \begin{pmatrix} -1 & 9 \\ 5 & -1 \end{pmatrix},\ \begin{pmatrix} 7 & -17 \\ 2 & -6 \end{pmatrix} \right\}$, and $A = \begin{pmatrix} -1 & 27 \\ -4 & 8 \end{pmatrix}$

    (h) $V = M_{2\times2}(R)$, $S = \left\{ \begin{pmatrix} 2 & 2 \\ 2 & 1 \end{pmatrix},\ \begin{pmatrix} 11 & 4 \\ 2 & 5 \end{pmatrix},\ \begin{pmatrix} 4 & 12 \\ 3 & 16 \end{pmatrix} \right\}$, and $A = \begin{pmatrix} 8 & 6 \\ 25 & 13 \end{pmatrix}$

    (i) $V = \operatorname{span}(S)$ with the inner product $\langle f, g \rangle = \int_0^\pi f(t)g(t)\,dt$, $S = \{\sin t, \cos t, 1, t\}$, and $h(t) = 2t + 1$

    (j) $V = C^4$, $S = \{(1, i, 2i, 1), (2 + 3i, 3i, 1 - i, 2i), (1 + 7i, 6 + 10i, 11 - 4i, 3 + 4i)\}$, and $x = (2 + 7i, 6 + 9i, 9 - 3i, 4 + 4i)$

    (k) $V = C^4$, $S = \{(4, 3 - 2i, i, 1 - 4i), (1 - 5i, 5 - 4i, 3 + 5i, 7 - 2i), (2 - 7i, 7 - 6i, 15 + 25i, 7 - 6i)\}$, and $x = (13 - 7i, 12 + 3i, 39 - 11i, 26 + 5i)$

    (l) $V = M_{2\times2}(C)$, $S = \left\{ \begin{pmatrix} 1-i & 2-3i \\ 2+2i & 4+i \end{pmatrix},\ \begin{pmatrix} 8i & 4 \\ 3-3i & 4+4i \end{pmatrix},\ \begin{pmatrix} 25-38i & 2-13i \\ 12-78i & 7+24i \end{pmatrix} \right\}$, and $A = \begin{pmatrix} 2+8i & 13+i \\ 10-10i & 9-9i \end{pmatrix}$

    (m) $V = M_{2\times2}(C)$, $S = \left\{ \begin{pmatrix} 1+i & i \\ 2-i & 1+3i \end{pmatrix},\ \begin{pmatrix} 1-7i & 9-8i \\ 1+10i & 6-2i \end{pmatrix},\ \begin{pmatrix} 111-32i & 34-31i \\ 71-26i & 7-15i \end{pmatrix} \right\}$, and $A = \begin{pmatrix} 7+5i & 3+18i \\ 9-6i & 3+7i \end{pmatrix}$

  3. In $R^2$, let

    $$\beta = \left\{ \left( \tfrac{1}{\sqrt{2}},\, \tfrac{1}{\sqrt{2}} \right),\ \left( \tfrac{1}{\sqrt{2}},\, -\tfrac{1}{\sqrt{2}} \right) \right\}.$$

    Find the Fourier coefficients of $(3, 4)$ relative to $\beta$.

  4. Let $S = \{(1, 0, i), (1, 2, 1)\}$ in $C^3$. Compute $S^\perp$.

  5. Let $S_0 = \{x_0\}$, where $x_0$ is a nonzero vector in $R^3$. Describe $S_0^\perp$ geometrically. Now suppose that $S = \{x_1, x_2\}$ is a linearly independent subset of $R^3$. Describe $S^\perp$ geometrically.

  6. Let V be an inner product space, and let W be a finite-dimensional subspace of V. If $x \notin W$, prove that there exists $y \in V$ such that $y \in W^\perp$, but $\langle x, y \rangle \ne 0$. Hint: Use Theorem 6.6.

  7. Let $\beta$ be a basis for a subspace W of an inner product space V, and let $z \in V$. Prove that $z \in W^\perp$ if and only if $\langle z, v \rangle = 0$ for every $v \in \beta$.

  8. Prove that if $\{w_1, w_2, \ldots, w_n\}$ is an orthogonal set of nonzero vectors, then the vectors $v_1, v_2, \ldots, v_n$ derived from the Gram-Schmidt process satisfy $v_i = w_i$ for $i = 1, 2, \ldots, n$. Hint: Use mathematical induction.

  9. Let $W = \operatorname{span}(\{(i, 0, 1)\})$ in $C^3$. Find orthonormal bases for W and $W^\perp$.

  10. Let W be a finite-dimensional subspace of an inner product space V. Prove that $V = W \oplus W^\perp$. Using the definition on page 76, prove that there exists a projection T on W along $W^\perp$ that satisfies $N(T) = W^\perp$. In addition, prove that $\|T(x)\| \le \|x\|$ for all $x \in V$. Hint: Use Theorem 6.6 and Exercise 10 of Section 6.1.

  11. Let A be an $n \times n$ matrix with complex entries. Prove that $AA^* = I$ if and only if the rows of A form an orthonormal basis for $C^n$. Visit goo.gl/iKcC4S for a solution.

  12. Prove that for any matrix $A \in M_{m \times n}(F)$, $(R(L_{A^*}))^\perp = N(L_A)$.

  13. Let V be an inner product space, S and $S_0$ be subsets of V, and W be a finite-dimensional subspace of V. Prove the following results.

    (a) $S_0 \subseteq S$ implies that $S^\perp \subseteq S_0^\perp$.

    (b) $S \subseteq (S^\perp)^\perp$; so $\operatorname{span}(S) \subseteq (S^\perp)^\perp$.

    (c) $W = (W^\perp)^\perp$. Hint: Use Exercise 6.

    (d) $V = W \oplus W^\perp$. (See the exercises of Section 1.3.)

  14. Let $W_1$ and $W_2$ be subspaces of a finite-dimensional inner product space. Prove that $(W_1 + W_2)^\perp = W_1^\perp \cap W_2^\perp$ and $(W_1 \cap W_2)^\perp = W_1^\perp + W_2^\perp$. (See the definition of the sum of subsets of a vector space on page 22.) Hint for the second equation: Apply Exercise 13(c) to the first equation.

  15. Let V be a finite-dimensional inner product space over F.

    (a) Parseval’s Identity. Let $\{v_1, v_2, \ldots, v_n\}$ be an orthonormal basis for V. For any $x, y \in V$ prove that

      $$\langle x, y \rangle = \sum_{i=1}^{n} \langle x, v_i \rangle \overline{\langle y, v_i \rangle}.$$

    (b) Use (a) to prove that if $\beta$ is an orthonormal basis for V with inner product $\langle \cdot, \cdot \rangle$, then for any $x, y \in V$

      $$\langle \phi_\beta(x), \phi_\beta(y) \rangle' = \langle [x]_\beta, [y]_\beta \rangle' = \langle x, y \rangle,$$

      where $\langle \cdot, \cdot \rangle'$ is the standard inner product on $F^n$.

  16. (a) Bessel’s Inequality. Let V be an inner product space, and let $S = \{v_1, v_2, \ldots, v_n\}$ be an orthonormal subset of V. Prove that for any $x \in V$ we have

      $$\|x\|^2 \ge \sum_{i=1}^{n} |\langle x, v_i \rangle|^2.$$

      Hint: Apply Theorem 6.6 to $x \in V$ and $W = \operatorname{span}(S)$. Then use Exercise 10 of Section 6.1.

    (b) In the context of (a), prove that Bessel’s inequality is an equality if and only if $x \in \operatorname{span}(S)$.

  17. Let T be a linear operator on an inner product space V. If $\langle T(x), y \rangle = 0$ for all $x, y \in V$, prove that $T = T_0$. In fact, prove this result if the equality holds for all x and y in some basis for V.

  18. Let $V = C([-1, 1])$. Suppose that $W_e$ and $W_o$ denote the subspaces of V consisting of the even and odd functions, respectively. (See Exercise 22 of Section 1.3.) Prove that $W_e^\perp = W_o$, where the inner product on V is defined by

    $$\langle f, g \rangle = \int_{-1}^{1} f(t)g(t)\,dt.$$

  19. In each of the following parts, find the orthogonal projection of the given vector on the given subspace W of the inner product space V.

    (a) $V = R^2$, $u = (2, 6)$, and $W = \{(x, y) : y = 4x\}$

    (b) $V = R^3$, $u = (2, 1, 3)$, and $W = \{(x, y, z) : x + 3y - 2z = 0\}$

    (c) $V = P(R)$ with the inner product $\langle f(x), g(x) \rangle = \int_0^1 f(t)g(t)\,dt$, $h(x) = 4 + 3x - 2x^2$, and $W = P_1(R)$

  20. In each part of Exercise 19, find the distance from the given vector to the subspace W.

  21. Let $V = C([-1, 1])$ with the inner product $\langle f, g \rangle = \int_{-1}^{1} f(t)g(t)\,dt$, and let W be the subspace $P_2(R)$, viewed as a space of functions. Use the orthonormal basis obtained in Example 5 to compute the “best” (closest) second-degree polynomial approximation of the function $h(t) = e^t$ on the interval $[-1, 1]$.

  22. Let $V = C([0, 1])$ with the inner product $\langle f, g \rangle = \int_0^1 f(t)g(t)\,dt$. Let W be the subspace spanned by the linearly independent set $\{t, \sqrt{t}\}$.

    (a) Find an orthonormal basis for W.

    (b) Let $h(t) = t^2$. Use the orthonormal basis obtained in (a) to obtain the “best” (closest) approximation of h in W.

  23. Let V be the vector space defined in Example 5 of Section 1.2, the space of all sequences $\sigma$ in F (where $F = R$ or $F = C$) such that $\sigma(n) \ne 0$ for only finitely many positive integers n. For $\sigma, \mu \in V$, we define $\langle \sigma, \mu \rangle = \sum_{n=1}^{\infty} \sigma(n)\overline{\mu(n)}$. Since all but a finite number of terms of the series are zero, the series converges.

    (a) Prove that $\langle \cdot, \cdot \rangle$ is an inner product on V, and hence V is an inner product space.

    (b) For each positive integer n, let $e_n$ be the sequence defined by $e_n(k) = \delta_{nk}$, where $\delta_{nk}$ is the Kronecker delta. Prove that $\{e_1, e_2, \ldots\}$ is an orthonormal basis for V.

    (c) Let $\sigma_n = e_1 + e_n$ and $W = \operatorname{span}(\{\sigma_n : n \ge 2\})$.

      (i) Prove that $e_1 \notin W$, so $W \ne V$.

      (ii) Prove that $W^\perp = \{0\}$, and conclude that $W \ne (W^\perp)^\perp$.

    Thus the assumption in Exercise 13(c) that W is finite-dimensional is essential.
