6.3 The Adjoint of a Linear Operator

In Section 6.1, we defined the conjugate transpose $A^*$ of a matrix $A$. For a linear operator T on an inner product space V, we now define a related linear operator on V called the adjoint of T, whose matrix representation with respect to any orthonormal basis $\beta$ for V is $[T]_\beta^*$. The analogy between conjugation of complex numbers and adjoints of linear operators will become apparent. We first need a preliminary result.

Let V be an inner product space, and let $y \in V$. The function $g\colon V \to F$ defined by $g(x) = \langle x, y\rangle$ is clearly linear. More interesting is the fact that if V is finite-dimensional, every linear transformation from V into F is of this form.

Theorem 6.8.

Let V be a finite-dimensional inner product space over F, and let $g\colon V \to F$ be a linear transformation. Then there exists a unique vector $y \in V$ such that $g(x) = \langle x, y\rangle$ for all $x \in V$.

Proof.

Let $\beta = \{v_1, v_2, \ldots, v_n\}$ be an orthonormal basis for V, and let

$$y = \sum_{i=1}^{n} \overline{g(v_i)}\, v_i.$$

Define $h\colon V \to F$ by $h(x) = \langle x, y\rangle$, which is clearly linear. Furthermore, for $1 \le j \le n$ we have

$$h(v_j) = \langle v_j, y\rangle = \left\langle v_j, \sum_{i=1}^{n} \overline{g(v_i)}\, v_i \right\rangle = \sum_{i=1}^{n} g(v_i)\langle v_j, v_i\rangle = \sum_{i=1}^{n} g(v_i)\,\delta_{ji} = g(v_j).$$

Since g and h agree on $\beta$, we have that $g = h$ by the corollary to Theorem 2.6 (p. 73).

To show that y is unique, suppose that $g(x) = \langle x, y'\rangle$ for all x. Then $\langle x, y\rangle = \langle x, y'\rangle$ for all x; so by Theorem 6.1(e) (p. 331), we have $y = y'$.

Example 1

Define $g\colon R^2 \to R$ by $g(a_1, a_2) = 2a_1 + a_2$; clearly g is a linear transformation. Let $\beta = \{e_1, e_2\}$, and let $y = g(e_1)e_1 + g(e_2)e_2 = 2e_1 + e_2 = (2, 1)$, as in the proof of Theorem 6.8. Then $g(a_1, a_2) = \langle (a_1, a_2), (2, 1)\rangle = 2a_1 + a_2$.
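
The construction of Theorem 6.8 is easy to carry out numerically. The following sketch (ours, not the text's; NumPy is assumed) recomputes the vector y of Example 1 from an orthonormal basis and checks that $\langle x, y\rangle = g(x)$:

```python
import numpy as np

# A sketch of the construction in Theorem 6.8 (not from the text):
# y = sum_i conj(g(v_i)) v_i over an orthonormal basis {v_i}.

def g(x):
    # The linear functional of Example 1: g(a1, a2) = 2 a1 + a2.
    return 2 * x[0] + x[1]

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # standard orthonormal basis
y = sum(np.conj(g(v)) * v for v in basis)             # y = (2, 1)

x = np.array([3.0, -4.0])
# <x, y> = y* x, which NumPy computes as vdot(y, x); it equals g(x).
print(g(x), np.vdot(y, x))  # both print 2.0
```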

Theorem 6.9.

Let V be a finite-dimensional inner product space, and let T be a linear operator on V. Then there exists a unique function $T^*\colon V \to V$ such that $\langle T(x), y\rangle = \langle x, T^*(y)\rangle$ for all $x, y \in V$. Furthermore, $T^*$ is linear.

Proof.

Let $y \in V$. Define $g\colon V \to F$ by $g(x) = \langle T(x), y\rangle$ for all $x \in V$. We first show that g is linear. Let $x_1, x_2 \in V$ and $c \in F$. Then

$$g(cx_1 + x_2) = \langle T(cx_1 + x_2), y\rangle = \langle cT(x_1) + T(x_2), y\rangle = c\langle T(x_1), y\rangle + \langle T(x_2), y\rangle = cg(x_1) + g(x_2).$$

Hence g is linear.

We now apply Theorem 6.8 to obtain a unique vector $y' \in V$ such that $g(x) = \langle x, y'\rangle$; that is, $\langle T(x), y\rangle = \langle x, y'\rangle$ for all $x \in V$. Defining $T^*\colon V \to V$ by $T^*(y) = y'$, we have $\langle T(x), y\rangle = \langle x, T^*(y)\rangle$.

To show that $T^*$ is linear, let $y_1, y_2 \in V$ and $c \in F$. Then for any $x \in V$, we have

$$\langle x, T^*(cy_1 + y_2)\rangle = \langle T(x), cy_1 + y_2\rangle = \bar{c}\langle T(x), y_1\rangle + \langle T(x), y_2\rangle = \bar{c}\langle x, T^*(y_1)\rangle + \langle x, T^*(y_2)\rangle = \langle x, cT^*(y_1) + T^*(y_2)\rangle.$$

Since x is arbitrary, $T^*(cy_1 + y_2) = cT^*(y_1) + T^*(y_2)$ by Theorem 6.1(e) (p. 331).

Finally, we need to show that $T^*$ is unique. Suppose that $U\colon V \to V$ is linear and that it satisfies $\langle T(x), y\rangle = \langle x, U(y)\rangle$ for all $x, y \in V$. Then $\langle x, T^*(y)\rangle = \langle x, U(y)\rangle$ for all $x, y \in V$, so $T^* = U$.

The linear operator T* described in Theorem 6.9 is called the adjoint of the operator T. The symbol T* is read “T star.”

Thus $T^*$ is the unique operator on V satisfying $\langle T(x), y\rangle = \langle x, T^*(y)\rangle$ for all $x, y \in V$. Note that we also have

$$\langle x, T(y)\rangle = \overline{\langle T(y), x\rangle} = \overline{\langle y, T^*(x)\rangle} = \langle T^*(x), y\rangle;$$

so $\langle x, T(y)\rangle = \langle T^*(x), y\rangle$ for all $x, y \in V$. We may view these equations symbolically as adding a * to T when shifting its position inside the inner product symbol.

For an infinite-dimensional inner product space, the adjoint of a linear operator T may be defined to be the function $T^*$ such that $\langle T(x), y\rangle = \langle x, T^*(y)\rangle$ for all $x, y \in V$, provided it exists. Although the uniqueness and linearity of $T^*$ follow as before, the existence of the adjoint is not guaranteed (see Exercise 24). The reader should observe the necessity of the hypothesis of finite-dimensionality in the proof of Theorem 6.8. Many of the theorems we prove about adjoints, nevertheless, do not depend on V being finite-dimensional.

Theorem 6.10 is a useful result for computing adjoints.

Theorem 6.10.

Let V be a finite-dimensional inner product space, and let $\beta$ be an orthonormal basis for V. If T is a linear operator on V, then

$$[T^*]_\beta = [T]_\beta^*.$$

Proof.

Let $A = [T]_\beta$, $B = [T^*]_\beta$, and $\beta = \{v_1, v_2, \ldots, v_n\}$. Then from the corollary to Theorem 6.5 (p. 344), we have

$$B_{ij} = \langle T^*(v_j), v_i\rangle = \overline{\langle v_i, T^*(v_j)\rangle} = \overline{\langle T(v_i), v_j\rangle} = \overline{A_{ji}} = (A^*)_{ij}.$$

Hence $B = A^*$.

Corollary.

Let A be an $n \times n$ matrix. Then $L_{A^*} = (L_A)^*$.

Proof.

If $\beta$ is the standard ordered basis for $F^n$, then, by Theorem 2.16 (p. 94), we have $[L_A]_\beta = A$. Hence $[(L_A)^*]_\beta = [L_A]_\beta^* = A^* = [L_{A^*}]_\beta$, and so $(L_A)^* = L_{A^*}$.

As an illustration of Theorem 6.10, we compute the adjoint of a specific linear operator.

Example 2

Let T be the linear operator on $C^2$ defined by $T(a_1, a_2) = (2ia_1 + 3a_2,\ a_1 - a_2)$. If $\beta$ is the standard ordered basis for $C^2$, then

$$[T]_\beta = \begin{pmatrix} 2i & 3 \\ 1 & -1 \end{pmatrix}.$$

So

$$[T^*]_\beta = [T]_\beta^* = \begin{pmatrix} -2i & 1 \\ 3 & -1 \end{pmatrix}.$$

Hence

$$T^*(a_1, a_2) = (-2ia_1 + a_2,\ 3a_1 - a_2).$$
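
As a quick numerical check of Example 2 (a sketch of ours, not part of the text; NumPy assumed), one can form the conjugate transpose of $[T]_\beta$ and verify the defining identity $\langle T(x), y\rangle = \langle x, T^*(y)\rangle$:

```python
import numpy as np

# [T]_beta for T(a1, a2) = (2i a1 + 3 a2, a1 - a2) in the standard basis.
T = np.array([[2j, 3],
              [1, -1]], dtype=complex)
T_star = T.conj().T  # [T*]_beta = [T]_beta^*, per Theorem 6.10

x = np.array([1 + 1j, 2 - 1j])
y = np.array([0.5j, 3 + 0j])
# With <u, v> = v* u, the two sides below agree for every x and y.
print(np.vdot(y, T @ x))       # <T(x), y>
print(np.vdot(T_star @ y, x))  # <x, T*(y)>, the same value
```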

The following theorem suggests an analogy between the conjugates of complex numbers and the adjoints of linear operators.

Theorem 6.11.

Let V be an inner product space, and let T and U be linear operators on V whose adjoints exist. Then

  (a) $T + U$ has an adjoint, and $(T + U)^* = T^* + U^*$.

  (b) $cT$ has an adjoint, and $(cT)^* = \bar{c}T^*$ for any $c \in F$.

  (c) $TU$ has an adjoint, and $(TU)^* = U^*T^*$.

  (d) $T^*$ has an adjoint, and $T^{**} = T$.

  (e) $I$ has an adjoint, and $I^* = I$.

Proof.

We prove (a) and (d); the rest are proved similarly. Let $x, y \in V$.

(a) Because

$$\langle (T+U)(x), y\rangle = \langle T(x) + U(x), y\rangle = \langle x, T^*(y)\rangle + \langle x, U^*(y)\rangle = \langle x, T^*(y) + U^*(y)\rangle = \langle x, (T^* + U^*)(y)\rangle,$$

it follows that $(T + U)^*$ exists and is equal to $T^* + U^*$.

(d) Similarly, since

$$\langle T^*(x), y\rangle = \langle x, T(y)\rangle,$$

(d) follows.

Unless stated otherwise, for the remainder of this chapter we adopt the convention that a reference to the adjoint of a linear operator on an infinite-dimensional inner product space assumes its existence.

Corollary.

Let A and B be $n \times n$ matrices. Then

  (a) $(A + B)^* = A^* + B^*$.

  (b) $(cA)^* = \bar{c}A^*$ for all $c \in F$.

  (c) $(AB)^* = B^*A^*$.

  (d) $A^{**} = A$.

  (e) $I^* = I$.

Proof.

We prove only (c); the remaining parts can be proved similarly.

Since $L_{(AB)^*} = (L_{AB})^* = (L_A L_B)^* = (L_B)^*(L_A)^* = L_{B^*} L_{A^*} = L_{B^*A^*}$, we have $(AB)^* = B^*A^*$.

In the preceding proof, we relied on the corollary to Theorem 6.10. An alternative proof, which holds even for nonsquare matrices, can be given by appealing directly to the definition of the conjugate transpose of a matrix (see Exercise 5).

Least Squares Approximation

Consider the following problem: An experimenter collects data by taking measurements $y_1, y_2, \ldots, y_m$ at times $t_1, t_2, \ldots, t_m$, respectively. For example, he or she may be measuring unemployment at various times during some period. Suppose that the data $(t_1, y_1), (t_2, y_2), \ldots, (t_m, y_m)$ are plotted as points in the plane. (See Figure 6.3.) From this plot, the experimenter feels that there exists an essentially linear relationship between y and t, say $y = ct + d$, and would like to find the constants c and d so that the line $y = ct + d$ represents the best possible fit to the data collected. One such estimate of fit is to calculate the error E that represents the sum of the squares of the vertical distances from the points to the line; that is,

$$E = \sum_{i=1}^{m} (y_i - ct_i - d)^2.$$
Figure 6.3: The data $(t_i, y_i)$ plotted as points in the plane.

Thus the problem is reduced to finding the constants c and d that minimize E. (For this reason the line $y = ct + d$ is called the least squares line.) If we let

$$A = \begin{pmatrix} t_1 & 1 \\ t_2 & 1 \\ \vdots & \vdots \\ t_m & 1 \end{pmatrix}, \quad x = \begin{pmatrix} c \\ d \end{pmatrix}, \quad\text{and}\quad y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix},$$

then it follows that $E = \|y - Ax\|^2$.

We develop a general method for finding an explicit vector $x_0 \in F^n$ that minimizes E; that is, given an $m \times n$ matrix A, we find $x_0 \in F^n$ such that $\|y - Ax_0\| \le \|y - Ax\|$ for all vectors $x \in F^n$. This method not only allows us to find the linear function that best fits the data, but also, for any positive integer k, the best fit using a polynomial of degree at most k.

First, we need some notation and two simple lemmas. For $x, y \in F^n$, let $\langle x, y\rangle_n$ denote the standard inner product of x and y in $F^n$. Recall that if x and y are regarded as column vectors, then $\langle x, y\rangle_n = y^*x$.

Lemma 1.

Let $A \in M_{m\times n}(F)$, $x \in F^n$, and $y \in F^m$. Then

$$\langle Ax, y\rangle_m = \langle x, A^*y\rangle_n.$$

Proof.

By a generalization of the corollary to Theorem 6.11 (see Exercise 5(b)), we have

$$\langle Ax, y\rangle_m = y^*(Ax) = (y^*A)x = (A^*y)^*x = \langle x, A^*y\rangle_n.$$

Lemma 2.

Let $A \in M_{m\times n}(F)$. Then $\operatorname{rank}(A^*A) = \operatorname{rank}(A)$.

Proof.

By the dimension theorem, we need only show that, for $x \in F^n$, we have $A^*Ax = 0$ if and only if $Ax = 0$. Clearly, $Ax = 0$ implies that $A^*Ax = 0$. So assume that $A^*Ax = 0$. Then

$$0 = \langle A^*Ax, x\rangle_n = \langle Ax, A^{**}x\rangle_m = \langle Ax, Ax\rangle_m,$$

so that $Ax = 0$.

Corollary.

If A is an $m \times n$ matrix such that $\operatorname{rank}(A) = n$, then $A^*A$ is invertible.
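
Lemma 2 and its corollary are easy to illustrate numerically; the sketch below (ours, not the text's) checks that $\operatorname{rank}(A^*A) = \operatorname{rank}(A)$ for a generic $5 \times 3$ matrix, so that $A^*A$ is invertible whenever A has full column rank:

```python
import numpy as np

# Illustration of Lemma 2 (a sketch, not from the text): rank(A*A) = rank(A).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))               # a generic 5 x 3 matrix, rank 3
print(np.linalg.matrix_rank(A))               # 3
print(np.linalg.matrix_rank(A.conj().T @ A))  # also 3, so A*A is invertible
```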

Now let A be an $m \times n$ matrix and $y \in F^m$. Define $W = \{Ax : x \in F^n\}$; that is, $W = R(L_A)$. By the corollary to Theorem 6.6 (p. 347), there exists a unique vector in W that is closest to y. Call this vector $Ax_0$, where $x_0 \in F^n$. Then $\|Ax_0 - y\| \le \|Ax - y\|$ for all $x \in F^n$; so $x_0$ has the property that $E = \|Ax_0 - y\|^2$ is minimal, as desired.

To develop a practical method for finding such an $x_0$, we note from Theorem 6.6 and its corollary that $Ax_0 - y \in W^\perp$; so $\langle Ax, Ax_0 - y\rangle_m = 0$ for all $x \in F^n$. Thus, by Lemma 1, we have that $\langle x, A^*(Ax_0 - y)\rangle_n = 0$ for all $x \in F^n$; that is, $A^*(Ax_0 - y) = 0$. So we need only find a solution $x_0$ to $A^*Ax = A^*y$. If, in addition, we assume that $\operatorname{rank}(A) = n$, then by Lemma 2 we have $x_0 = (A^*A)^{-1}A^*y$. We summarize this discussion in the following theorem.

Theorem 6.12.

Let $A \in M_{m\times n}(F)$ and $y \in F^m$. Then there exists $x_0 \in F^n$ such that $(A^*A)x_0 = A^*y$ and $\|Ax_0 - y\| \le \|Ax - y\|$ for all $x \in F^n$. Furthermore, if $\operatorname{rank}(A) = n$, then $x_0 = (A^*A)^{-1}A^*y$.

To return to our experimenter, let us suppose that the data collected are (1, 2), (2, 3), (3, 5), and (4, 7). Then

$$A = \begin{pmatrix} 1 & 1 \\ 2 & 1 \\ 3 & 1 \\ 4 & 1 \end{pmatrix} \quad\text{and}\quad y = \begin{pmatrix} 2 \\ 3 \\ 5 \\ 7 \end{pmatrix};$$

hence

$$A^*A = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 1 & 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 2 & 1 \\ 3 & 1 \\ 4 & 1 \end{pmatrix} = \begin{pmatrix} 30 & 10 \\ 10 & 4 \end{pmatrix}.$$

Thus

$$(A^*A)^{-1} = \frac{1}{20}\begin{pmatrix} 4 & -10 \\ -10 & 30 \end{pmatrix}.$$

Therefore

$$\begin{pmatrix} c \\ d \end{pmatrix} = x_0 = \frac{1}{20}\begin{pmatrix} 4 & -10 \\ -10 & 30 \end{pmatrix} \begin{pmatrix} 1 & 2 & 3 & 4 \\ 1 & 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} 2 \\ 3 \\ 5 \\ 7 \end{pmatrix} = \begin{pmatrix} 1.7 \\ 0 \end{pmatrix}.$$

It follows that the line $y = 1.7t$ is the least squares line. The error E may be computed directly as $E = \|Ax_0 - y\|^2 = 0.3$.
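
The computation above can be checked with a few lines of code. The following sketch (ours, not the text's; NumPy assumed) solves the normal equations $(A^*A)x_0 = A^*y$ for the same four data points and recovers $c = 1.7$, $d = 0$, and $E = 0.3$:

```python
import numpy as np

# Least squares line for the data (1,2), (2,3), (3,5), (4,7), via Theorem 6.12.
t = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 3.0, 5.0, 7.0])
A = np.column_stack([t, np.ones_like(t)])  # rows (t_i, 1), as in the text

x0 = np.linalg.solve(A.T @ A, A.T @ y)     # normal equations; rank(A) = 2
E = np.sum((y - A @ x0) ** 2)
print(x0)  # [1.7, 0.0]
print(E)   # 0.3 (up to rounding)
```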

Suppose that the experimenter chose the times $t_i$ $(1 \le i \le m)$ to satisfy

$$\sum_{i=1}^{m} t_i = 0.$$

Then the two columns of A would be orthogonal, so $A^*A$ would be a diagonal matrix (see Exercise 19). In this case, the computations are greatly simplified.

In practice, the $m \times 2$ matrix A in our least squares application has rank equal to two, and hence $A^*A$ is invertible by the corollary to Lemma 2. For, otherwise, the first column of A is a multiple of the second column, which consists only of ones. But this would occur only if the experimenter collects all the data at exactly one time.

Finally, the method above may also be applied if, for some k, the experimenter wants to fit a polynomial of degree at most k to the data. For instance, if a polynomial $y = ct^2 + dt + e$ of degree at most 2 is desired, the appropriate model is

$$x = \begin{pmatrix} c \\ d \\ e \end{pmatrix}, \quad y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix}, \quad\text{and}\quad A = \begin{pmatrix} t_1^2 & t_1 & 1 \\ \vdots & \vdots & \vdots \\ t_m^2 & t_m & 1 \end{pmatrix}.$$
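
In code, the degree-k fit differs from the linear one only in the design matrix: each power of t contributes a column, and Theorem 6.12 applies unchanged as long as A has full column rank. A minimal sketch (the helper name is ours, not the text's):

```python
import numpy as np

def least_squares_poly(t, y, k):
    """Best-fit coefficients, highest power first, for a polynomial of
    degree at most k; assumes the design matrix has rank k + 1."""
    A = np.column_stack([t ** j for j in range(k, -1, -1)])  # columns t^k, ..., t, 1
    return np.linalg.solve(A.T @ A, A.T @ y)

t = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 3.0, 5.0, 7.0])
print(least_squares_poly(t, y, 1))  # the least squares line: [1.7, 0.0]
print(least_squares_poly(t, y, 2))  # c, d, e for the quadratic y = c t^2 + d t + e
```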

Minimal Solutions to Systems of Linear Equations

Even when a system of linear equations $Ax = b$ is consistent, there may be no unique solution. In such cases, it may be desirable to find a solution of minimal norm. A solution s to $Ax = b$ is called a minimal solution if $\|s\| \le \|u\|$ for all other solutions u. The next theorem assures that every consistent system of linear equations has a unique minimal solution and provides a method for computing it.

Theorem 6.13.

Let $A \in M_{m\times n}(F)$ and $b \in F^m$. Suppose that $Ax = b$ is consistent. Then the following statements are true.

  (a) There exists exactly one minimal solution s of $Ax = b$, and $s \in R(L_{A^*})$.

  (b) The vector s is the only solution to $Ax = b$ that lies in $R(L_{A^*})$; in fact, if u satisfies $(AA^*)u = b$, then $s = A^*u$.

Proof.

(a) For simplicity of notation, we let $W = R(L_{A^*})$ and $W' = N(L_A)$. Let x be any solution to $Ax = b$. By Theorem 6.6 (p. 347), $x = s + y$ for some $s \in W$ and $y \in W^\perp$. But $W^\perp = W'$ by Exercise 12, and therefore $b = Ax = As + Ay = As$. So s is a solution to $Ax = b$ that lies in W. To prove (a), we need only show that s is the unique minimal solution. Let v be any solution to $Ax = b$. By Theorem 3.9 (p. 172), we have that $v = s + u$, where $u \in W'$. Since $s \in W$, which equals $W'^\perp$ by Exercise 12, we have

$$\|v\|^2 = \|s + u\|^2 = \|s\|^2 + \|u\|^2 \ge \|s\|^2$$

by Exercise 10 of Section 6.1. Thus s is a minimal solution. We can also see from the preceding calculation that if $\|v\| = \|s\|$, then $u = 0$; hence $v = s$. Therefore s is the unique minimal solution to $Ax = b$, proving (a).

(b) Assume that v is also a solution to $Ax = b$ that lies in W. Then

$$v - s \in W \cap W' = W \cap W^\perp = \{0\};$$

so $v = s$.

Finally, suppose that $(AA^*)u = b$, and let $v = A^*u$. Then $v \in W$ and $Av = b$. Therefore $s = v = A^*u$ by the discussion above.

Example 3

Consider the system

$$\begin{aligned} x + 2y + z &= 4 \\ x - y + 2z &= -11 \\ x + 5y &= 19. \end{aligned}$$

Let

$$A = \begin{pmatrix} 1 & 2 & 1 \\ 1 & -1 & 2 \\ 1 & 5 & 0 \end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix} 4 \\ -11 \\ 19 \end{pmatrix}.$$

To find the minimal solution to this system, we must first find some solution u to AA*x=b. Now

$$AA^* = \begin{pmatrix} 6 & 1 & 11 \\ 1 & 6 & -4 \\ 11 & -4 & 26 \end{pmatrix};$$

so we consider the system

$$\begin{aligned} 6x + y + 11z &= 4 \\ x + 6y - 4z &= -11 \\ 11x - 4y + 26z &= 19, \end{aligned}$$

for which one solution is

$$u = \begin{pmatrix} 1 \\ -2 \\ 0 \end{pmatrix}.$$

(Any solution will suffice.) Hence

$$s = A^*u = \begin{pmatrix} -1 \\ 4 \\ -3 \end{pmatrix}$$

is the minimal solution to the given system.
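
Example 3 can be verified numerically as well. In the sketch below (ours, not the text's), lstsq is used instead of solve because $AA^*$ is singular here (A has rank 2); since the system $(AA^*)u = b$ is consistent, lstsq returns an exact solution, and any such u yields the same minimal solution $s = A^*u$:

```python
import numpy as np

# Minimal solution of Example 3 via Theorem 6.13(b): s = A* u, (AA*) u = b.
A = np.array([[1.0,  2.0, 1.0],
              [1.0, -1.0, 2.0],
              [1.0,  5.0, 0.0]])
b = np.array([4.0, -11.0, 19.0])

u = np.linalg.lstsq(A @ A.T, b, rcond=None)[0]  # one solution of (AA*) u = b
s = A.T @ u                                     # the minimal solution
print(s)      # approximately [-1, 4, -3]
print(A @ s)  # reproduces b: [4, -11, 19]
```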

Exercises

  1. Label the following statements as true or false. Assume that the underlying inner product spaces are finite-dimensional.

    (a) Every linear operator has an adjoint.

    (b) Every linear operator on V has the form $x \mapsto \langle x, y\rangle$ for some $y \in V$.

    (c) For every linear operator T on V and every ordered basis $\beta$ for V, we have $[T^*]_\beta = ([T]_\beta)^*$.

    (d) The adjoint of a linear operator is unique.

    (e) For any linear operators T and U and scalars a and b, $(aT + bU)^* = aT^* + bU^*$.

    (f) For any $n \times n$ matrix A, we have $(L_A)^* = L_{A^*}$.

    (g) For any linear operator T, we have $(T^*)^* = T$.

  2. For each of the following inner product spaces V (over F) and linear transformations $g\colon V \to F$, find a vector y such that $g(x) = \langle x, y\rangle$ for all $x \in V$.

    (a) $V = R^3$, $g(a_1, a_2, a_3) = a_1 - 2a_2 + 4a_3$

    (b) $V = C^2$, $g(z_1, z_2) = z_1 - 2z_2$

    (c) $V = P_2(R)$ with $\langle f(x), h(x)\rangle = \int_0^1 f(t)h(t)\,dt$, $g(f) = f(0) + f'(1)$

  3. For each of the following inner product spaces V and linear operators T on V, evaluate $T^*$ at the given vector in V.

    (a) $V = R^2$, $T(a, b) = (2a + b,\ a - 3b)$, $x = (3, 5)$.

    (b) $V = C^2$, $T(z_1, z_2) = (2z_1 + iz_2,\ (1 - i)z_1)$, $x = (3 - i,\ 1 + 2i)$.

    (c) $V = P_1(R)$ with $\langle f, g\rangle = \int_{-1}^{1} f(t)g(t)\,dt$, $T(f) = f' + 3f$, $f(t) = 4 - 2t$

  4. Complete the proof of Theorem 6.11.

  5. (a) Complete the proof of the corollary to Theorem 6.11 by using Theorem 6.11, as in the proof of (c).

     (b) State a result for nonsquare matrices that is analogous to the corollary to Theorem 6.11, and prove it using a matrix argument.

  6. Let T be a linear operator on an inner product space V. Let $U_1 = T + T^*$ and $U_2 = TT^*$. Prove that $U_1 = U_1^*$ and $U_2 = U_2^*$.

  7. Give an example of a linear operator T on an inner product space V such that $N(T) \ne N(T^*)$.

  8. Let V be a finite-dimensional inner product space, and let T be a linear operator on V. Prove that if T is invertible, then $T^*$ is invertible and $(T^*)^{-1} = (T^{-1})^*$.

  9. Prove that if $V = W \oplus W^\perp$ and T is the projection on W along $W^\perp$, then $T = T^*$. Hint: Recall that $N(T) = W^\perp$. (For definitions, see the exercises of Sections 1.3 and 2.1.)

  10. Let T be a linear operator on an inner product space V. Prove that $\|T(x)\| = \|x\|$ for all $x \in V$ if and only if $\langle T(x), T(y)\rangle = \langle x, y\rangle$ for all $x, y \in V$. Hint: Use Exercise 20 of Section 6.1.

  11. For a linear operator T on an inner product space V, prove that $T^*T = T_0$ implies $T = T_0$. Is the same result true if we assume that $TT^* = T_0$?

  12. Let V be an inner product space, and let T be a linear operator on V. Prove the following results.

    (a) $R(T^*)^\perp = N(T)$.

    (b) If V is finite-dimensional, then $R(T^*) = N(T)^\perp$. Hint: Use Exercise 13(c) of Section 6.2.

  13. Let T be a linear operator on a finite-dimensional inner product space V. Prove the following results.

    (a) $N(T^*T) = N(T)$. Deduce that $\operatorname{rank}(T^*T) = \operatorname{rank}(T)$.

    (b) $\operatorname{rank}(T) = \operatorname{rank}(T^*)$. Deduce from (a) that $\operatorname{rank}(TT^*) = \operatorname{rank}(T)$.

    (c) For any $n \times n$ matrix A, $\operatorname{rank}(A^*A) = \operatorname{rank}(AA^*) = \operatorname{rank}(A)$.

  14. Let V be an inner product space, and let $y, z \in V$. Define $T\colon V \to V$ by $T(x) = \langle x, y\rangle z$ for all $x \in V$. First prove that T is linear. Then show that $T^*$ exists, and find an explicit expression for it.

The following definition is used in Exercises 15-17 and is an extension of the definition of the adjoint of a linear operator.

Definition.

Let $T\colon V \to W$ be a linear transformation, where V and W are finite-dimensional inner product spaces with inner products $\langle\cdot\,,\cdot\rangle_1$ and $\langle\cdot\,,\cdot\rangle_2$, respectively. A function $T^*\colon W \to V$ is called an adjoint of T if $\langle T(x), y\rangle_2 = \langle x, T^*(y)\rangle_1$ for all $x \in V$ and $y \in W$.

  15. Let $T\colon V \to W$ be a linear transformation, where V and W are finite-dimensional inner product spaces with inner products $\langle\cdot\,,\cdot\rangle_1$ and $\langle\cdot\,,\cdot\rangle_2$, respectively. Prove the following results.

    (a) There is a unique adjoint $T^*$ of T, and $T^*$ is linear.

    (b) If $\beta$ and $\gamma$ are orthonormal bases for V and W, respectively, then $[T^*]_\gamma^\beta = ([T]_\beta^\gamma)^*$.

    (c) $\operatorname{rank}(T^*) = \operatorname{rank}(T)$.

    (d) $\langle T^*(x), y\rangle_1 = \langle x, T(y)\rangle_2$ for all $x \in W$ and $y \in V$.

    (e) For all $x \in V$, $T^*T(x) = 0$ if and only if $T(x) = 0$.

  16. State and prove a result that extends the first four parts of Theorem 6.11 using the preceding definition.

  17. Let $T\colon V \to W$ be a linear transformation, where V and W are finite-dimensional inner product spaces. Prove that $(R(T^*))^\perp = N(T)$, using the preceding definition.

  18. † Let A be an $n \times n$ matrix. Prove that $\det(A^*) = \overline{\det(A)}$. Visit goo.gl/csqoFY for a solution.

  19. Suppose that A is an $m \times n$ matrix in which no two columns are identical. Prove that $A^*A$ is a diagonal matrix if and only if every pair of columns of A is orthogonal.

  20. For each of the sets of data that follows, use the least squares approximation to find the best fits with both (i) a linear function and (ii) a quadratic function. Compute the error E in both cases.

    (a) $\{(-3, 9), (-2, 6), (0, 2), (1, 1)\}$

    (b) $\{(1, 2), (3, 4), (5, 7), (7, 9), (9, 12)\}$

    (c) $\{(-2, 4), (-1, 3), (0, 1), (1, -1), (2, -3)\}$

  21. In physics, Hooke’s law states that (within certain limits) there is a linear relationship between the length x of a spring and the force y applied to (or exerted by) the spring. That is, $y = cx + d$, where c is called the spring constant. Use the following data to estimate the spring constant (the length is given in inches and the force is given in pounds).

    Length x    Force y
    3.5         1.0
    4.0         2.2
    4.5         2.8
    5.0         4.3
  22. Find the minimal solution to each of the following systems of linear equations.

    (a) $x + 2y - z = 12$

    (b) $x + 2y - z = 1$, $2x + 3y + z = 2$, $4x + 7y - z = 4$

    (c) $x + y - z = 0$, $2x - y + z = 3$, $x - y + z = 2$

    (d) $x + y + z - w = 1$, $2x - y + w = 1$

  23. Consider the problem of finding the least squares line $y = ct + d$ corresponding to the m observations $(t_1, y_1), (t_2, y_2), \ldots, (t_m, y_m)$.

    (a) Show that the equation $(A^*A)x_0 = A^*y$ of Theorem 6.12 takes the form of the normal equations:

      $$\left(\sum_{i=1}^{m} t_i^2\right)c + \left(\sum_{i=1}^{m} t_i\right)d = \sum_{i=1}^{m} t_iy_i$$

      and

      $$\left(\sum_{i=1}^{m} t_i\right)c + md = \sum_{i=1}^{m} y_i.$$

      These equations may also be obtained from the error E by setting the partial derivatives of E with respect to both c and d equal to zero.

    (b) Use the second normal equation of (a) to show that the least squares line must pass through the center of mass, $(\bar{t}, \bar{y})$, where

      $$\bar{t} = \frac{1}{m}\sum_{i=1}^{m} t_i \quad\text{and}\quad \bar{y} = \frac{1}{m}\sum_{i=1}^{m} y_i.$$
  24. Let V and $\{e_1, e_2, \ldots\}$ be defined as in Exercise 23 of Section 6.2. Define $T\colon V \to V$ by

    $$T(\sigma)(k) = \sum_{i=k}^{\infty} \sigma(i) \quad \text{for every positive integer } k.$$

    Notice that the infinite series in the definition of T converges because $\sigma(i) \ne 0$ for only finitely many i.

    (a) Prove that T is a linear operator on V.

    (b) Prove that for any positive integer n, $T(e_n) = \sum_{i=1}^{n} e_i$.

    (c) Prove that T has no adjoint. Hint: By way of contradiction, suppose that $T^*$ exists. Prove that for any positive integer n, $T^*(e_n)(k) \ne 0$ for infinitely many k.
