Chapter 7

The Linear Vector Space

You see things that are and say ‘Why?’ But I dream things that never were and say ‘Why not?’

George Bernard Shaw

7.1  INTRODUCTION

In Chapters 2 and 3, while discussing the formation of a wave packet representing a moving particle, we found that the wave packet may be formed out of a large number of plane-wave components. In fact, the state of a particle may be represented by a single plane wave or by a combination of two or a large number of plane waves (in the form of a wave packet), depending upon the form of the potential in which the particle is moving. The state of the particle is obtained by solving the corresponding Schrodinger equation. The Schrodinger equation [Eq. (4.25)] is a linear equation and, therefore, its solutions (i.e., the eigenstates of the particle) obey the principle of linear superposition. [In the classical case, we have the principle of superposition for (classical) waves in a linear medium.] Therefore, if ψ1 and ψ2 are solutions of a Schrodinger equation (i.e., are eigenstates of the Hamiltonian operator), other solutions can be constructed in the form

 

ψ = c1ψ1 + c2ψ2        (7.1)

where c1 and c2 are arbitrary constants.

The situation is analogous to that of vectors in ordinary space, where any vector R may be expressed as a linear combination of the unit vectors e1, e2, e3. In this chapter, we shall explore this analogy and observe that the set of eigenstates {ψn} of a Hermitian operator acts in the so-called linear vector space on the same footing as the unit vectors e1, e2, e3 do in ordinary space.

In the next section, we shall discuss some characteristic features of the eigenstates of Hermitian operators.

7.2  SOME CHARACTERISTICS OF EIGENSTATES OF HERMITIAN OPERATORS

Let us consider the eigenstates of the energy operator (i.e., the Hamiltonian). While solving the Schrodinger equation of a particle in a simple one-dimensional potential (such as a particle in a potential box or in a linear harmonic potential), we found expressions for the set of allowed eigenstates. For example, in the case of a particle in a box with rigid boundary conditions, we found the eigenstates

ψn(x) = √(2/L) sin(nπx/L),   n = 1, 2, 3, …        (7.2)

These eigenstates have orthonormal properties, expressed as

∫0L ψn*(x) ψn(x) dx = 1        (7.3a)
∫0L ψn*(x) ψm(x) dx = 0   (n ≠ m)        (7.3b)

Let us try a simple mathematical exercise: let us see what we get if we take a linear combination of all the wave functions of the set {ψn}. Let us call this sum f(x).

f(x) = Σn an ψn(x)        (7.4a)
f(x) = Σn an √(2/L) sin(nπx/L)        (7.4b)

If we borrow results from the Fourier series analysis of functions, the form [Eq. (7.4)] suggests that any function f(x) within the interval [0, L] satisfying the condition f(0) = f(L) = 0 may be expressed as a linear combination of the functions sin(nπx/L) and, therefore, as a linear combination of the eigenstates ψn(x). For a given function f(x), we may find the coefficients an as follows:

From Eq. (7.4b) we get

equation

so

equation

or

equation
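As a quick numerical illustration of this expansion, here is a minimal sketch. It assumes the box eigenstates ψn(x) = √(2/L) sin(nπx/L) and the standard coefficient formula an = ∫0L ψn(x) f(x) dx; the test function f(x) = x(L – x) is an arbitrary choice, not taken from the text.

```python
import numpy as np

# Expand a test function in the box eigenstates psi_n(x) = sqrt(2/L) sin(n*pi*x/L)
# and recover it from the coefficients a_n = ∫ psi_n(x) f(x) dx over [0, L].
# The test function f(x) = x(L - x) is an illustrative choice (it obeys f(0) = f(L) = 0).
L = 1.0
x = np.linspace(0.0, L, 2001)

def psi(n, x):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

f = x * (L - x)
N = 20                                                  # truncate the (infinite) sum at N terms
a = np.array([np.trapz(psi(n, x) * f, x) for n in range(1, N + 1)])

f_rebuilt = sum(a[n - 1] * psi(n, x) for n in range(1, N + 1))
print("max reconstruction error:", np.abs(f - f_rebuilt).max())
```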

Let us take a second example of a particle in a box with periodic boundary conditions (discussed in Section 5.5). We make a slight change here. We consider the potential box of width 2L, extending from x = –L to x = L.

The eigenstates, now, are

ψn(x) = (1/√(2L)) ei(nπx/L),   n = 0, ±1, ±2, …        (7.6)

Again, these eigenstates have orthonormal properties, expressed as

equation

and

equation

Let us take a linear combination of all the wave functions and call this sum f(x)

equation
equation

On multiplying both sides of Eq. (7.8b) by ψm*(x) and integrating, we get

equation

Therefore, the coefficients bn in Eq. (7.8a) are given by

equation

Equation (7.8b) is nothing but the complex form of the Fourier series: any function f(x) in the interval [–L, L] which has the periodic behaviour f(x + 2L) = f(x) may be expressed as a linear combination of the complex exponential functions ei(nπx/L) [that is, as a linear combination of the eigenstates ψn(x) of Eq. (7.6)], the expansion coefficients bn being given by Eq. (7.10). In the next section, we shall first rewrite Eqs (7.6), (7.8), and (7.10) in Dirac notation, and then discuss their analogy with the expansion of vectors in ordinary space in terms of the unit vectors e1, e2, e3.
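A similar numerical sketch for the periodic box follows. The normalized form ψn(x) = ei(nπx/L)/√(2L) and the particular 2L-periodic test function are assumptions made only for the illustration.

```python
import numpy as np

# Expand a 2L-periodic test function in the periodic-box eigenstates
# psi_n(x) = exp(i*n*pi*x/L) / sqrt(2L), n = 0, ±1, ±2, ..., using b_n = ∫ psi_n* f dx [Eq. (7.10)].
L = 1.0
x = np.linspace(-L, L, 4001)
f = np.cos(np.pi * x / L) + 0.5 * np.sin(2 * np.pi * x / L)   # illustrative 2L-periodic choice

def psi(n, x):
    return np.exp(1j * n * np.pi * x / L) / np.sqrt(2.0 * L)

ns = range(-10, 11)
b = {n: np.trapz(np.conj(psi(n, x)) * f, x) for n in ns}
f_rebuilt = sum(b[n] * psi(n, x) for n in ns)
print("max |f - f_rebuilt|:", np.abs(f - f_rebuilt).max())
```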

7.3  DIRAC BRA AND KET NOTATIONS

Let us start with the integral

equation

and try to write it in a compact notation, say

 

(ψn, ψm)        (7.11b)

or

equation

or

equation

In this way, we have written the integral (7.11a) in Dirac bra and ket notation.

To write the Dirac notation more explicitly, follow these steps:

 

(i) Start with

bracket ( )

(ii) replace the letter c by a vertical line

bra | ket ( | )

(iii) write ⟨ | ⟩ for ( | )

 

(iv) call ⟨ | the bra and | ⟩ the ket.

 

Now, (i) let the wave function ψn(x) be denoted by |ψn⟩ or simply by |n⟩; (ii) let ψn*(x), the complex conjugate of ψn(x), be denoted by ⟨ψn| or simply by ⟨n|; and (iii) let the integral ∫ψn*(x) ψm(x) dx be denoted by ⟨ψn|ψm⟩ or ⟨n|m⟩. We have thus reached the Dirac notation.

Let us consider the example of a particle in a one-dimensional box with periodic boundary conditions and rewrite Eqs (7.6) to (7.8) in Dirac notations:

equation
equation
equation
equation
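The content of the Dirac notation can be mimicked numerically: if a ket |n⟩ is represented by a column of samples of ψn(x), the bra ⟨n| is its conjugate transpose, and ⟨n|m⟩ becomes an ordinary (discretized) scalar product. A minimal sketch, again assuming the periodic-box eigenstates used above:

```python
import numpy as np

# Represent each ket |n> by a column of samples of psi_n(x); the bra <n| is then the
# conjugate transpose, and <n|m> = ∫ psi_n* psi_m dx becomes a weighted dot product.
L = 1.0
x, dx = np.linspace(-L, L, 4000, endpoint=False, retstep=True)

def ket(n):
    return np.exp(1j * n * np.pi * x / L) / np.sqrt(2.0 * L)

def braket(n, m):
    return np.sum(np.conj(ket(n)) * ket(m)) * dx   # discretized inner product

print(abs(braket(2, 2)))   # ≈ 1  (normalization)
print(abs(braket(2, 3)))   # ≈ 0  (orthogonality)
```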

Now let us consider the usual three-dimensional direct space (we call it ordinary space) expressed in a Cartesian coordinate system. Let the three mutually perpendicular coordinate axes be denoted by q1, q2, q3 and the unit vectors along these axes by e1, e2, e3. Then we have

 

ei · ei = 1 (i = 1, 2, 3)        (7.15a)
ei · ej = 0   for   i ≠ j        (7.15b)

Also any (position) vector R may be expressed as

 

R = R1 e1 + R2 e2 + R3 e3
equation

If we compare the structure of Eqs (7.13a), (7.13b), and (7.14) with those of Eqs (7.15a), (7.15b), and (7.16), we find similarities. For example, the three unit vectors ei

  1. are of unit length (dot product ei · ei = 1)
  2. are normal to each other (dot product ei · ej = 0 for i ≠ j)
  3. are able to express any vector R in terms of their linear combinations.

Similarly, the eigenfunctions |ψn⟩ (or |n⟩)

  1. are normalized to unity (⟨n|n⟩ = 1)
  2. are orthogonal to each other (⟨n|m⟩ = 0 for n ≠ m)
  3. are able to express any function |f⟩ in terms of their linear combination.

It is then evident that the set of eigenfunctions {|ψn⟩} may be treated as basis (unit) vectors in a hypothetical finite- (or infinite-) dimensional space (let us call it the wave-function space, the quantum mechanical vector space, or simply the linear vector space); these unit vectors have the same characteristics in the linear vector space as the ordinary unit vectors (e1, e2, e3) have in ordinary vector space. The unit vectors |ψn⟩ are generally termed state vectors or ket vectors in this (hypothetical) linear vector space. The function |f⟩, formed out of a linear combination of the unit vectors |ψn⟩, is also a state vector in this linear vector space. The linear vector space, when complete, is also called a Hilbert space.

We may say that, whereas

unit vectors e1, e2, e3 live in ordinary three-dimensional space,

in quantum mechanics

wave functions ψn(r) live in Hilbert space

or

state vectors |ψn⟩ live in linear vector space.

It may be mentioned here, explicitly, that the mathematical definition of orthonormality of basis (unit) vectors in ordinary space and that of orthonormality of state vectors (or basis vectors) in linear vector space, are different. For example

equation

and

equation

But these respective definitions of orthonormality [on the one hand, orthonormality of the unit vectors e1, e2, e3 in ordinary space and, on the other hand, orthonormality of the state vectors |n⟩ in linear vector space] lead to similar concepts:

(i) expressing a vector R in ordinary space as a linear combination of its unit vectors e1, e2, e3, and expressing a state vector in linear vector space as a linear combination of its unit (or basis) state vectors; and (ii) the dot product (or scalar product) of two vectors R and S in ordinary space, where

equation

is

equation

and the scalar product (or inner product) of two state vectors in the same linear vector space, where

equation

and

equation

is

equation

The inner product of two state vectors is, in general, a complex number. The eigenstates of an operator (say Â) form a complete set (we saw in Section 5.4 that the eigenstates ψn(x) of a particle in a one-dimensional potential box form a complete set). From Eq. (7.18a), we have

equation

So, we may write

equation

or

equation

This is the completeness condition.
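In a finite N-dimensional space the completeness condition can be checked directly: summing the projectors |n⟩⟨n| over a complete orthonormal basis reproduces the identity matrix. A minimal sketch (the basis below is generated from a random unitary matrix purely for illustration):

```python
import numpy as np

# Completeness condition: sum_n |n><n| = 1 for a complete orthonormal basis [Eq. (7.20)].
# Here the kets |n> are taken as the columns of a randomly generated unitary matrix.
N = 5
rng = np.random.default_rng(0)
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
U, _ = np.linalg.qr(M)                                  # columns form an orthonormal basis

resolution = sum(np.outer(U[:, n], U[:, n].conj()) for n in range(N))
print(np.allclose(resolution, np.eye(N)))               # True
```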

7.4  MORE ABOUT BRA, KET VECTORS AND LINEAR VECTOR SPACE

You will remember that we started with the energy eigenstates ψn(x) of a particle in a one-dimensional potential box and argued that these states (in the form of eigenkets) may be thought of as unit (or basis) vectors in the linear vector space (on the same footing as the unit vectors e1, e2, e3 in ordinary space).

It should always be remembered that the basis states are, in fact, functions (rather than three-dimensional vectors). It is only because of the similarity between these functions and the unit vectors e1, e2, e3 (in ordinary space) that they are called unit (or basis) vectors (in linear vector space). We could even consider the (energy) eigenstates of a particle in a three-dimensional potential V(r); the set of these eigenstates ψnx, ny, nz(r) would then form the set of unit vectors of a different linear vector space. So, in general, we may consider any (linear) operator Â and, from its eigenvalue equation, obtain a set of allowed eigenstates (eigenkets) with eigenvalues an:

equation

An operator  is said to be linear, if it satisfies the equation

equation

A state vector may not be an eigenstate of the operator Â. In that case, the operator Â, operating on it, transforms it into another state, as

equation

Equation (7.23) shows the linear operator Â operating on a ket state. The operator Â may also operate on a bra state; the convention then is that the bra is put on the left of the operator Â and the operation is defined as

equation

In fact, if we take the Hermitian conjugate of Eq. (7.23), we obtain Eq. (7.24), since Â is a Hermitian operator and the bra is the Hermitian conjugate of the corresponding ket (see Section 7.6).

A transformation like Eq. (7.23), in which one state vector transforms into another through the application of an operator, is said to be a linear transformation if it satisfies

equation

An operator is said to be the identity operator if

equation

for any state in the vector space. An operator is said to be a null operator (or zero operator) if

equation

for any state in the vector space. The sum or difference of two operators Â and B̂ is defined as

equation

The product of two operators Â and B̂ is defined as

equation

where

equation

Now

equation

where

equation

It is clear from Eqs (7.29) and (7.31) that, in general,

equation

In fact, it was already pointed out in Chapter 4 that linear operators do not necessarily commute.

7.5  MATRIX REPRESENTATION OF STATE VECTORS AND OPERATORS

7.5.1  Linear Transformations in Ordinary Space

Let us first look at transformations in the three-dimensional ordinary space. Consider a position vector R expressed in a coordinate system (with unit vectors e1, e2, e3) as

 

R = R1 e1 + R2 e2 + R3 e3        (7.34)

Keeping the coordinate system intact, let us rotate the vector R (clockwise) about the e3-axis through an angle ϕ (see Figure 7.1). The rotated vector, now called R′, may be written as

equation

where

equation

Figure 7.1 Cartesian coordinate system with three mutually perpendicular unit vectors e1, e2, e3 along three axes, q1, q2, q3. Vector R is rotated (clockwise) about q3-axis through an angle ϕ (keeping the axes intact). This is called the active rotation

or, in matrix form

equation

or

equation

where

equation

is the rotation matrix or rotation operator, which is said to transform the vector R into the vector R′.
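A minimal numerical sketch of this active rotation follows; the explicit sign convention taken for 'clockwise' is an assumption (Figure 7.1 fixes the actual one), but the essential point, that the rotation matrix acts on the column of components and preserves lengths, is unchanged.

```python
import numpy as np

# Active rotation of Eq. (7.35): R' = R_hat(phi) R, a rotation about the e3-axis.
# The clockwise sign convention used here is an assumption for illustration.
def rotation_about_e3(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c,   s,   0.0],
                     [-s,  c,   0.0],
                     [0.0, 0.0, 1.0]])

R = np.array([1.0, 2.0, 3.0])
R_prime = rotation_about_e3(np.pi / 6) @ R
print(R_prime)
print(np.linalg.norm(R), np.linalg.norm(R_prime))   # lengths are equal under rotation
```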

7.5.2  Linear Transformations in Linear Vector Space

Let equation denote the set of orthonormal basis states in a linear vector space of N dimensions (N may be finite or infinite). Any state vector equation in this vector space may be represented in the form,

equation

where

equation

We now consider an operator equation representing a linear transformation

equation

where  is a (linear) operator in this vector space. State vector equation may be expressed as

equation

Equation (7.38) may be written in terms of basis vectors equation as

equation

or

equation

Taking scalar product of both sides with bra equation we get

equation

or

equation

or

equation

Equation (7.41) expresses the linear transformation [Eq. (7.38)] by connecting the coefficients cj of one state vector to the coefficients di of the other through the transformation operator Â.

7.5.3  Matrices

The quantities

equation

are called the matrix elements of the operator  in the basis equation. Equation (7.41) may be written as

equation

This equation relates the coefficients cj, which uniquely define the state vector of Eq. (7.36), to the coefficients di, which uniquely define the state vector of Eq. (7.39). Thus Eq. (7.41) is completely equivalent to the linear transformation [Eq. (7.38)]. The set of matrix elements aij specifies the operator Â completely in the chosen basis. Equation (7.43) really represents a set of N linear algebraic equations, as follows:

equation

This set of equations may be written as a matrix equation,

equation

In the linear vector space, the linear transformation [Eq. (7.38)] has been expressed in matrix form in Eq. (7.45). This transformation, in which the operator Â transforms one state vector into another, is similar to the linear transformation in ordinary space, where the rotation operator transforms the vector R into the vector R′ [Eq. (7.35)]. In the matrix representation, the ket vectors are written in the form of column matrices, whereas the operator Â is expressed in the form of a square matrix; the quantities aij are the elements of the matrix form of the operator Â. So

equation
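Numerically, the matrix representation amounts to nothing more than a matrix–vector product: the operator becomes an N × N array of matrix elements aij, the ket becomes a column of coefficients, and the transformed coefficients follow at once. A small sketch with made-up numbers:

```python
import numpy as np

# Matrix form of a linear transformation, Eqs (7.43)-(7.45): d_i = sum_j a_ij c_j.
# The matrix elements and coefficients below are illustrative, not from the text.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [3.0, 0.0, 1.0]])     # a_ij = <i|A|j> in the chosen basis
c = np.array([1.0, -1.0, 2.0])      # coefficients c_j of the original ket
d = A @ c                           # coefficients d_i of the transformed ket
print(d)                            # -> [-1.  1.  5.]
```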
7.6  SOME SPECIAL MATRICES/OPERATORS

We have seen previously that in quantum mechanics the state vectors may be written in the form of a row or a column matrix. For example, a state vector (ket vector) in a linear vector space of N dimensions described by an orthonormal basis set

equation

may be rewritten as one-column matrix

equation

Similarly, the corresponding bra vector

equation

may be rewritten as one-row matrix

equation

An operator  in the linear vector space spanned by the complete orthonormal basis set equation may be expressed as square matrix

equation

where

equation

 

Other matrices related to  = (aij), which occur frequently in quantum mechanical discussions, are:

1. Complex conjugate of Â:

equation

2. Transpose of Â:

equation

3. Hermitian conjugate of Â:

equation

which is obtained by conjugation and transposition of the elements.

4. Inverse of Â:

The transformation

equation

or

equation

may be inverted, provided Eqs (7.23a) can be solved for the components of one state vector in terms of those of the other, which is possible if and only if the determinant of the coefficients aij is non-zero, that is,

equation

In this case, we may write

equation

where the (operator) matrix Â–1 has the elements

equation

with αij as co-factor of the element aij of Â. Properties of all these special matrices are summarized in Table 7.1.
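The four related matrices listed above are one-line operations in numpy; the 2 × 2 complex matrix below is an arbitrary illustration.

```python
import numpy as np

# Complex conjugate, transpose, Hermitian conjugate, and inverse of a sample matrix A.
A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

A_conj = np.conj(A)          # complex conjugate  A*
A_T    = A.T                 # transpose          A^T
A_dag  = np.conj(A).T        # Hermitian conjugate A† (conjugation + transposition)
A_inv  = np.linalg.inv(A)    # inverse A^-1 (exists because det A != 0)

print(np.linalg.det(A))                     # non-zero, so the inverse exists
print(np.allclose(A @ A_inv, np.eye(2)))    # True
```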

7.6.1  Hermitian Operators

We have already discussed in Section 4.8 that the Hamiltonian of a particle in a real potential is a Hermitian operator. We shall discuss here some characteristics of Hermitian operators. As mentioned previously and shown in Table 7.1, an operator Â is Hermitian if

Â† = Â        (7.55a)

which means

equation

or

equation

or

equation

Table 7.1 Matrix Properties/Operator Properties

equation

Let us now check the hermiticity of a few quantities:

Example 1   Let  = b (b is a real number)

equation
= b δni, nj         (7.56a)
equation
= b [δni, nj]* = b δni, nj        (7.56b)

So b is Hermitian.

Example 2   Let  = V(x) [V(x) is a real function of x]

equation
equation

So V(x) is Hermitian.

Example 3   Let  = ∂/∂x

equation

Integrating by parts, we get

equation

The first term vanishes as the wave functions ψni and ψnj, if square integrable, vanish at x = ± ∞. So

equation

So ∂ / ∂x is not Hermitian.

Example 4   Let Â = –iħ ∂/∂x

equation
equation

So –iħ ∂/∂x (i.e., the momentum operator p̂x) is a Hermitian operator. Similarly, –iħ ∂/∂y and –iħ ∂/∂z are also Hermitian. Therefore, the operator p̂2 is also Hermitian. It follows, then, that the free-particle Hamiltonian Ĥ is Hermitian.

equation

equation

For a particle in a potential field V(r), [where V(r) is real], the Hamiltonian

equation

is Hermitian.
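Examples 3 and 4 can also be checked in a discretized picture, in which ∂/∂x becomes a finite-difference matrix. The sketch below assumes periodic boundary conditions and sets ħ = 1, purely to keep the matrices simple.

```python
import numpy as np

# Finite-difference check of Examples 3 and 4 (periodic boundary conditions, hbar = 1):
# the matrix of d/dx is not Hermitian, while that of -i d/dx is Hermitian.
N, L = 200, 10.0
dx = L / N
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
D[0, -1] = -1 / (2 * dx)           # periodic wrap-around terms
D[-1, 0] = 1 / (2 * dx)

P = -1j * D                         # momentum-like operator -i d/dx
print(np.allclose(D, np.conj(D).T))   # False: d/dx is not Hermitian
print(np.allclose(P, np.conj(P).T))   # True : -i d/dx is Hermitian
```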

Even at the cost of repetition, we shall discuss the following theorems.

Theorem 1

The eigenvalues of a Hermitian operator are real.

▷ Proof

Let equation be an eigenstate of a Hermitian operator Â, with eigenvalue λ.

equation

Then,

equation

(as  is Hermitian)

or

equation

so λ = λ* and hence λ is real.

Theorem 2

The eigenstates of a Hermitian operator Â, belonging to distinct eigenvalues are orthogonal.

▷ Proof

Let equation and equation be eigenstate of  with eigenvalues λ and β, respectively. So

equation

and

equation

Taking scalar product of Eqs (7.62a) and (7.62b) with eigenstates equation and equation, respectively, we get

equation

and

equation

As operator  is Hermitian, Eq. (7.63b) gives

equation

Taking difference of (7.63a) and (7.63c), one has

equation

As λ ≠ β, one is left only with the option

equation

So eigenstates equation and equation are orthogonal.
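Both theorems are easy to verify numerically for a finite Hermitian matrix; the sketch below builds one at random and checks that its eigenvalues are real and its eigenvectors orthonormal.

```python
import numpy as np

# Theorems 1 and 2 for a randomly generated Hermitian matrix H = M + M†.
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = M + np.conj(M).T                                   # Hermitian by construction

eigvals, eigvecs = np.linalg.eigh(H)                   # eigh exploits Hermiticity
print(eigvals)                                          # all real
print(np.allclose(np.conj(eigvecs).T @ eigvecs, np.eye(4)))   # eigenvectors orthonormal
```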

7.7  CHANGE OF BASIS: UNITARY TRANSFORMATION

7.7.1  Change of Basis in Ordinary Space: Rotation of Axes

As discussed in Section 7.3, let us consider a position vector R

 

R = R1 e1 + R2 e2 + R3 e3        (Refer to Eq. 7.16)

in the three-dimensional direct space described by the mutually perpendicular axes q1, q2, q3 (with unit vectors e1, e2, e3 along these axes) (Figure 7.2). Let us now rotate the Cartesian coordinate system anti-clockwise about the q3-axis through an angle ϕ (the vector R is not touched). Let us call the new axes q′1, q′2, q′3 and the unit vectors along these e′1, e′2, e′3, respectively. It can easily be checked that

equation

The set of three equations (7.65a) may be written as

equation

where qij are the matrix elements of the rotation matrix

equation

Also, if vector R is now expressed in the new (rotated) coordinate system as

equation

it can be easily seen that

equation

Figure 7.2 Cartesian coordinate system. The coordinate system is rotated (anti-clockwise) about q3-axis through an angle ϕ (keeping vector R untouched).
This is called passive rotation

equation

In fact, the relation between the orthogonal sets of unit vectors e′1, e′2, e′3 and e1, e2, e3 may be obtained for any general rotation of the coordinate system. Similarly, the relation between the components of a position vector in the rotated and original coordinate systems may be obtained for any general rotation. It can easily be checked that

 

|R| = |R′|        (7.68a)

and for any two vectors R and S

 

R · S = R′ · S′        (7.68b)

7.7.2  Change of Basis in Linear Vector Space: Unitary Transformation

Let us now consider a linear vector space of N-dimensions spanned by the complete orthonormal set of state vectors equation. A new basis set may be constructed in this linear vector space by forming N linear combinations of the orthonormal basis vectors equation (similar to the case of forming new unit vectors equation as linear combinations of unit vectors ej’s in three-dimensional ordinary space). Let the new basis vectors be denoted by equation. Then

equation

where uij are complex numbers to be chosen in such a way that the new basis set equation is orthonormal.

The condition of orthonormality on the new basis set equation gives

equation

Substituting from Eq. (7.69), the condition (7.70) gives

equation

or

equation

This relation requires the matrix Û = (uij) to satisfy the relation

Û†Û = 1        (7.72)

which is, in fact, the definition of a unitary matrix. So the complex numbers uij appearing in the transformation [Eq. (7.69)] are the matrix elements of a unitary matrix Û. Therefore, the transformation given by Eq. (7.69) is called a unitary transformation. Such a unitary transformation is the mechanism for replacing one orthonormal basis set by another.

Let us consider a state vector equation, represented with respect to the basis set equation by

equation

and with respect to the basis set equation by

equation

Since the two expressions (7.73) and (7.74) represent one and the same state vector, we have

equation

Multiplying both sides from the left by the appropriate bra vector, we get

equation

or

equation

or

equation

We know that the matrix Û is unitary, so we have

equation

or

equation

This is the equation that provides the connection between the components bi of the state vector with respect to one basis and its components with respect to the other.

It may be emphasized here that a state vector has an identity independent of the basis with respect to which its components are given in the linear vector space. The situation is just like that of a position vector R in coordinate space: the vector R has an identity independent of the coordinate system (be it Cartesian, spherical polar, cylindrical, or any other curvilinear coordinate system) with respect to which its components are given.

Let us now consider an operator  in the linear vector space. Let this operator be specified by the matrix, with matrix–elements aij with respect to the basis set equation,

equation

where

equation

Let this operator Â be expressed through the matrix elements a′ij with respect to the new basis set, in which representation we call it Â′, that is,

equation

where

equation

while the basis vectors of the two sets are related through the unitary transformation [Eq. (7.69)]. It is easy to find the relation between aij and a′ij:

equation

The set of equations (7.82) may be written in operator form as

equation
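The whole construction can be summarized numerically: generate a unitary matrix Û, check Û†Û = 1, and transform an operator to the new basis. The convention Â′ = ÛÂÛ† used in the sketch is an assumption (the operator relation above fixes on which side Û appears), but basis-independent quantities such as the trace and the eigenvalues come out the same either way.

```python
import numpy as np

# Change of basis by a unitary matrix: U†U = 1, and A' = U A U† (convention assumed).
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

A_new = Q @ A @ np.conj(Q).T
print(np.allclose(np.conj(Q).T @ Q, np.eye(3)))        # unitarity: U†U = 1
print(np.allclose(np.trace(A), np.trace(A_new)))       # trace is basis independent
print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                  np.sort_complex(np.linalg.eigvals(A_new))))  # same eigenvalues
```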
7.8  TENSOR PRODUCT OR DIRECT PRODUCT OF VECTOR SPACES

In this chapter, we have discussed the linear vector space corresponding to a single-particle system: a particle in some potential field having eigenstates ψn(x). The linear vector space of a particle is spanned by the set of all its orthonormal eigenkets. In fact, the quantum number n may represent a set of two quantum numbers nx and ny (for a particle in a two-dimensional potential field) or a set of three quantum numbers nx, ny, and nz (for a particle in a three-dimensional potential field). The linear vector space may be of finite or infinite dimension (i.e., the set of eigenkets may be finite or infinite). Let us for the moment assume it to be of finite dimension N. Now the linear vector space of a system of two particles (not necessarily in the same potential field) may be regarded as the tensor product (or direct product) of the linear vector spaces of the individual particles. The tensor product space is formed out of the two independent and unrelated vector spaces that are spanned by the basis vectors of the two individual particles. The basis vectors of the tensor product space are constructed as

equation

or

equation

If N1 and N2 are the dimensions of the two individual vector spaces, the tensor product space has dimension N1 × N2. Let us take a simple example with N1 = 2 and N2 = 3, so that the tensor product space is of dimension 2 × 3 = 6. The six basis vectors of the product space are

equation
equation
equation

To be more general, the tensor product space is formed out of two independent and unrelated vector spaces, each spanned by its own basis vectors. Suppose V and W are linear vector spaces of dimensions M and N, respectively. Then the tensor product of V and W, written as V ⊗ W (and read as ‘V tensor W’), is an MN-dimensional vector space. We may say that the tensor product is a way of putting vector spaces together in order to form larger vector spaces. Let {|i⟩} be the set of M basis vectors of the vector space V and {|j⟩} be the set of N basis vectors of the vector space W. Then the set of basis vectors of the space V ⊗ W is {|i⟩ ⊗ |j⟩}. Generally, abbreviated notations such as |i⟩|j⟩, |i, j⟩, or even |ij⟩ are used for the tensor product |i⟩ ⊗ |j⟩. Let us take a simple example with M = 2 and N = 3. In terms of column vectors, the two basis vectors |1⟩ and |2⟩ of space V are written as

equation

Similarly, the three basis vectors equation and equation of space W are written as

equation

The six basis vectors equation of space V ⊗ W are written as

equation
equation
equation
equation
equation
equation

We can similarly write the tensor product of two vector states

equation and equation (in space V and W respectively)

as

equation

Let us now look at what sorts of linear operators act on the vector space V ⊗ W. Suppose |v⟩ and |w⟩ are vectors in the spaces V and W, respectively. Also suppose Â and B̂ are linear operators on the spaces V and W, respectively. Then we define a linear operator Â ⊗ B̂ (called the tensor product operator) on the space V ⊗ W as follows:

equation

and if operators  and equation are expressed in matrix representation as

equation
equation

Then

equation

Here, for example, the term a11B̂ denotes the N × N submatrix whose elements are the matrix elements of B̂, each multiplied by a11. For example, for a two-dimensional V space and a three-dimensional W space, we have the operators Â and B̂ as

equation
equation

then the operator Â ⊗ B̂ on the 6-dimensional space V ⊗ W is

equation
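In matrix language the tensor product is exactly the Kronecker product, so the 2 × 3 example above can be reproduced with numpy.kron; the particular operators chosen below are illustrative only.

```python
import numpy as np

# Tensor product space V ⊗ W with M = 2, N = 3, built with the Kronecker product.
e1, e2 = np.eye(2)                 # basis kets of V (rows [1,0] and [0,1])
f1, f2, f3 = np.eye(3)             # basis kets of W

ket = np.kron(e1, f2)              # the product ket |1> ⊗ |2>, a 6-component vector
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])         # illustrative operator on V
B = np.diag([1.0, 2.0, 3.0])       # illustrative operator on W
AB = np.kron(A, B)                 # 6 x 6 matrix: each a_ij multiplies the block B

print(ket)                         # [0. 1. 0. 0. 0. 0.]
print(np.allclose(AB @ np.kron(e1, f2),
                  np.kron(A @ e1, B @ f2)))   # (A⊗B)(|v>⊗|w>) = (A|v>)⊗(B|w>)
```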
7.9  OUTER PRODUCT OPERATORS

Let us now discuss the outer product notation for operators. The outer product of two state vectors is a linear transformation operator; it is really equivalent to a matrix. For example, let us consider the state vectors in an N-dimensional vector space expressed in Eqs (7.36) and (7.39), that is

equation

and

equation

The vectors equation and equation may be written in column matrix form (in the basis equation) as

equation

The outer product of equation and equation may then be expressed as

equation

Alternatively, we may follow the steps given below, to arrive at the matrix form (7.96):

From Eqs. (7.36a) and (7.39), we have

equation

So the (l, k) matrix element of operator equation in the basis equation is

equation
equation

which is the same as Eq. (7.96).

Let us consider the two-dimensional vector space (e.g., the vector space of spin-1/2 particles, which we shall study in detail in Chapter 12). The basis vectors are generally taken as the eigenstates of the spin angular momentum operator Ŝz.

equation
equation

The state vector equation is also written as equation or equation. Similarly, equation is also written equation or equation. We have the following observations:

1.

equation

which is the same as the completeness requirement [Eq. (7.20)].

2.

equation

and we may write

equation

3.

equation
equation

so

equation

and

equation

Therefore,

equation
equation

We observe that the operator equation, operating on equation, changes it to equation

equation

and equation on equation gives null state

equation

Similarly, operator equation operating on equation gives equation state while operating on equation gives null state.
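These observations translate directly into 2 × 2 matrices. The sketch below assumes the column forms |↑⟩ = (1, 0)ᵀ and |↓⟩ = (0, 1)ᵀ and reproduces the action of the outer product operators described above.

```python
import numpy as np

# Outer products in the two-dimensional (spin-1/2-like) space, with |up> = (1,0)^T
# and |down> = (0,1)^T assumed as the column representations.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

flip_up = np.outer(up, np.conj(down))      # the operator |up><down|
print(flip_up @ down)                      # -> |up>, i.e. [1. 0.]
print(flip_up @ up)                        # -> the null state [0. 0.]
print(np.outer(up, np.conj(up)) + np.outer(down, np.conj(down)))  # completeness: identity
```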

EXERCISES

Exercise 7.1

Show that

  1.  
    equation
  2.  
    equation

Exercise 7.2

Show that

  1. Â + Â†
  2. i(Â – Â†)
  3. Â†Â

are all Hermitian for any operator Â.

Exercise 7.3

Consider two Hermitian operators Â and B̂. Show that the product ÂB̂ is Hermitian only if Â and B̂ commute.

Exercise 7.4

If Â is a Hermitian operator, show that exp(iÂ) is a unitary operator.

Exercise 7.5

Show that equation is a unitary operator.

Exercise 7.6

Show that any operator  may be expressed as the linear combination of two Hermitian operators.

Exercise 7.7

If Â and B̂ are Hermitian, which of the following are Hermitian?

  1.  
    equation
  2.  
    equation
  3.  
    equation

Exercise 7.8

If B̂ = Â†Â, show that the trace of the matrix B̂ is positive definite unless Â is a null matrix, in which case tr B̂ = 0.

Exercise 7.9

If Â and B̂ are two non-commuting Hermitian operators and equation, show that equation is Hermitian.

SOLUTIONS

Solution 7.1

(b) Let us take the (i, j)th element of the L.H.S.

equation

So

equation

Solution 7.2

From the definition of the Hermitian conjugate of an operator Â, we can easily check that

(Â†)† = Â

so we have

  1. (Â + Â†)† = Â† + (Â†)† = Â† + Â = Â + Â†
  2. [i(Â – Â†)]† = (–i)[Â† – (Â†)†] = –i(Â† – Â) = i(Â – Â†)
  3. (Â†Â)† = Â†(Â†)† = Â†Â

Solution 7.3

equation

Solution 7.4

Let

equation

So

equation

Therefore, exp(iÂ) is a unitary operator.

Solution 7.5

An operator Â is unitary if

Â†Â = I (the identity operator)

For

equation

so

equation

Solution 7.6

Any operator  may be decomposed as

equation

We know from Solution 7.2 that each of the parts (Â + Â†) and i(Â – Â†) is Hermitian. So any operator Â may be decomposed into two Hermitian parts.

Solution 7.7

  1.  
    equation

    so Hermitian.

  2. Not Hermitian
  3. Hermitian

Solution 7.8

equation

Solution 7.9

equation

Then

equation

so equation is Hermitian.
