You see things that are and say ‘Why?’ But I dream things that never were and say ‘Why not?’
George Bernard Shaw
In Chapters 2 and 3, while discussing the formation of a wave packet, which represents a moving particle, we found that the wave packet may be formed out of a large number of plane wave components. In fact, the state of a particle may be represented by a simple plane wave or by a combination of two or a large number of plane waves (in the form of a wave packet), depending upon the form of the potential in which the particle is moving. The state of the particle is obtained by solving the corresponding Schrodinger equation of the particle. The Schrodinger equation [Eq. (4.25)] is a linear equation and, therefore, the solutions (i.e., the eigenstates of the particle) obey the principle of linear superposition. [In the classical case, we have the principle of superposition for (classical) waves in a linear medium.] Therefore, if ψ1 and ψ2 are the solutions of a Schrodinger equation (i.e., are the eigenstates of the Hamiltonian operator), the other solutions can be constructed of the form
ψ = c1ψ1 + c2ψ2
where c1 and c2 are arbitrary constants.
The situation seems to be analogous to that of vectors in ordinary space, where any vector R may be expressed as a linear combination of the unit vectors e1, e2, e3. In this chapter, we shall explore this analogy and shall observe that the set of eigenstates {ψn} of a Hermitian operator acts in the so-called linear vector space on the same footing as the unit vectors e1, e2, e3 do in ordinary space.
In the next Section, we shall discuss some of the characteristic features of the eigenstates of the Hermitian operators.
Let us consider the eigenstates of the energy operator (i.e., the Hamiltonian). While solving the Schrodinger equation of a particle in a simple one-dimensional potential (like a particle in a potential box or a particle in a linear harmonic potential), we found expressions for the set of allowed eigenstates. For example, in the case of a particle in a box with rigid boundary conditions, we found the eigenstates
ψn(x) = √(2/L) sin(nπx/L),  n = 1, 2, 3, …
These eigenstates have orthonormal properties, expressed as
∫0L ψm*(x) ψn(x) dx = δmn
Let us try a simple mathematical exercise: what do we get if we take a linear combination of all the wave functions of the set {ψn}? Let us call this sum f(x):
f(x) = Σn an ψn(x)
If we borrow results from the Fourier series analysis of functions, the form [Eq. (7.4)] suggests that any function f(x) within the interval [0, L] satisfying the condition f(0) = f(L) = 0 may be expressed as a linear combination of the functions sin(nπx/L) and, therefore, as a linear combination of the eigenstates ψn(x). For a given function f(x), we may find the coefficients an as follows:
From Eq. (7.4b), on multiplying both sides by ψm*(x) and integrating over [0, L], we get
∫0L ψm*(x) f(x) dx = Σn an ∫0L ψm*(x) ψn(x) dx = Σn an δmn
so
am = ∫0L ψm*(x) f(x) dx
or
am = √(2/L) ∫0L sin(mπx/L) f(x) dx
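The orthonormality relation and the recipe for the coefficients an can be checked numerically. The following sketch (assuming NumPy; the trial function f(x) and the helper names are illustrative choices, not from the text) builds the box eigenstates ψn(x) = √(2/L) sin(nπx/L), verifies ∫ψm ψn dx = δmn, and recovers the coefficients of a known linear combination:

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 20001)

def integrate(y):
    # trapezoidal rule for the integral of y over the grid x
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2.0))

def psi(n):
    # rigid-wall box eigenstate: psi_n(x) = sqrt(2/L) sin(n pi x / L)
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# Orthonormality: integral of psi_m * psi_n equals delta_mn (psi_n real here)
overlap = integrate(psi(1) * psi(2))   # ~ 0
norm = integrate(psi(3) * psi(3))      # ~ 1

# Trial function with known coefficients a_1 = 2, a_4 = 0.5;
# a_m = integral of psi_m(x) f(x) dx should recover them.
f = 2.0 * psi(1) + 0.5 * psi(4)
a1 = integrate(psi(1) * f)
a4 = integrate(psi(4) * f)
```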
Let us take a second example of a particle in a box with periodic boundary conditions (discussed in Section 5.5). We make a slight change here. We consider the potential box of width 2L, extending from x = –L to x = L.
The eigenstates now are
ψn(x) = (1/√(2L)) e^(inπx/L),  n = 0, ±1, ±2, …
Again, these eigenstates have orthonormal properties, expressed as
∫–LL ψm*(x) ψn(x) dx = δmn
and the periodicity property ψn(x + 2L) = ψn(x).
Let us take a linear combination of all the wave functions and call this sum f(x):
f(x) = Σn bn ψn(x)
On multiplying both sides of Eq. (7.8b) by ψm*(x) and integrating over [–L, L], we get
∫–LL ψm*(x) f(x) dx = Σn bn ∫–LL ψm*(x) ψn(x) dx = bm
Therefore, the coefficients bn in Eq. (7.8a) are given by
bn = ∫–LL ψn*(x) f(x) dx = (1/√(2L)) ∫–LL e^(–inπx/L) f(x) dx
Equation (7.8b) is nothing but the complex form of the Fourier series, where any function f(x) (in the interval [–L, L]) which has the periodic behaviour f(x + 2L) = f(x) may be expressed as a linear combination of the complex exponential functions e^(inπx/L) [that is, may be expressed as a linear combination of the eigenstates ψn(x) of Eq. (7.6)], the expansion coefficients bn being given by Eq. (7.10). In the next section, we shall first rewrite Eqs (7.6), (7.8), and (7.10) in Dirac notations, and then discuss their analogy with the expansion of vectors in ordinary space in terms of the unit vectors e1, e2, e3.
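The same numerical check works for the complex exponential basis of the periodic box; here the inner product ∫ψm*ψn dx needs the complex conjugate. A sketch (NumPy assumed; the trial coefficients are arbitrary):

```python
import numpy as np

L = 1.0
x = np.linspace(-L, L, 40001)

def inner(f, g):
    # <f|g> = integral of f*(x) g(x) dx, trapezoidal rule
    y = np.conj(f) * g
    return complex(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2.0))

def psi(n):
    # periodic-box eigenstate: psi_n(x) = e^{i n pi x / L} / sqrt(2L)
    return np.exp(1j * n * np.pi * x / L) / np.sqrt(2.0 * L)

# Orthonormality, including negative n
assert abs(inner(psi(2), psi(-1))) < 1e-8
assert abs(inner(psi(2), psi(2)) - 1.0) < 1e-8

# b_n = <psi_n|f> recovers the (complex) expansion coefficients
f = 3.0 * psi(2) + (1.0 - 2.0j) * psi(-1)
assert abs(inner(psi(2), f) - 3.0) < 1e-6
assert abs(inner(psi(-1), f) - (1.0 - 2.0j)) < 1e-6
```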
Let us start with the integral
∫ ψm*(x) ψn(x) dx
and try to write it in a compact notation, say
(ψm, ψn)
or
⟨ψm|ψn⟩
or
⟨m|n⟩
This way, we have really written the integral (7.11a) in Dirac bra and ket notations.
To be more explicit in writing Dirac notations, follow the steps:
(i) Start with the word bracket: ( )
(ii) Replace the letter c by a vertical line, splitting the word into bra | ket: ( | )
(iii) Write ⟨ | ⟩ for ( | )
(iv) Call ⟨ | a bra and | ⟩ a ket.
Now (i) let the wave function ψn(x) be denoted as |ψn⟩ or simply as |n⟩, (ii) let the complex conjugate of ψn(x), i.e., ψn*(x), be denoted as ⟨ψn| or simply as ⟨n|, and (iii) let the integral ∫ ψm*(x) ψn(x) dx be denoted as ⟨m|n⟩. We have reached the Dirac notations.
Let us consider the example of a particle in a one-dimensional box with periodic boundary conditions and rewrite Eqs (7.6) to (7.8) in Dirac notations:
Now let us consider the usual three-dimensional direct space (we call it ordinary space) expressed in a Cartesian coordinate system. Let the three mutually perpendicular coordinate axes be denoted by q1, q2, q3 and the unit vectors along these by e1, e2, e3. Then we have
ei · ej = δij
Also, any (position) vector R may be expressed as
R = x1e1 + x2e2 + x3e3
If we compare the structure of Eqs (7.13a), (7.13b), and (7.14) with those of Eqs (7.15a), (7.15b), and (7.16), we find similarities. For example, the three unit vectors ei satisfy the orthonormality relations ei · ej = δij and serve to expand any vector R.
Similarly, the eigenfunctions |n⟩ satisfy ⟨m|n⟩ = δmn and serve to expand any function f(x).
Then it is evident that the set of eigenfunctions may be treated as basis (unit) vectors in a hypothetical finite- (or infinite-) dimensional space (let us call it the wave function space, or quantum mechanical vector space, or merely linear vector space); these unit vectors have characteristics in linear vector space similar to those that the ordinary unit vectors (e1, e2, e3) have in ordinary vector space. The unit vectors are generally termed state vectors or ket vectors in the (hypothetical) linear vector space. A function formed out of a linear combination of unit vectors is also a state vector in this linear vector space. The linear vector space, when complete, is also called a Hilbert space.
We may say that, whereas
unit vectors e1, e2, e3 live in ordinary 3-dimensional space,
in quantum mechanics,
wave functions ψn(r) live in Hilbert space
or
state vectors |n⟩ live in linear vector space.
It may be mentioned here, explicitly, that the mathematical definition of orthonormality of basis (unit) vectors in ordinary space and that of orthonormality of state vectors (or basis vectors) in linear vector space are different. For example,
ei · ej = δij
and
⟨m|n⟩ = ∫ ψm*(x) ψn(x) dx = δmn
But these respective definitions of orthonormality [on the one hand, orthonormality of unit vectors e1, e2, e3 in ordinary space, and on the other hand, orthonormality of state vectors in linear vector space] lead to similar concepts:
(i) expressing a vector R in ordinary space as a linear combination of its unit vectors e1, e2, e3, and expressing a state vector in linear vector space as a linear combination of its unit (or basis) state vectors |n⟩; (ii) the dot product (or scalar product) of two vectors R and S in ordinary space, where
R = Σi Ri ei and S = Σi Si ei
is
R · S = Σi Ri Si
and the scalar product or inner product of two state vectors |φ⟩ and |χ⟩ in the same linear vector space, where
|φ⟩ = Σn an |n⟩
and
|χ⟩ = Σn bn |n⟩
is
⟨φ|χ⟩ = Σn an* bn
The inner product of two state vectors is, in general, a complex number. The eigenstates of an operator (say Â) form a complete set (we had seen in Section 5.4 that the eigenstates ψn(x) of a particle in a one-dimensional potential box form a complete set). From Eq. (7.18a), we have
an = ⟨n|φ⟩
So, we may write
|φ⟩ = Σn an|n⟩ = Σn |n⟩⟨n|φ⟩
or
Σn |n⟩⟨n| = 1
This is the completeness condition.
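In a finite-dimensional space the completeness condition can be verified directly: summing the projectors |n⟩⟨n| over an orthonormal basis reproduces the identity matrix, and acting with that sum on any state returns the state. A minimal sketch (NumPy assumed; the basis and the state |φ⟩ are arbitrary illustrative choices):

```python
import numpy as np

N = 4
kets = np.eye(N)  # column n is the basis ket |n>

# Sum of projectors |n><n| over the full basis
P = sum(np.outer(kets[:, n], kets[:, n].conj()) for n in range(N))
assert np.allclose(P, np.eye(N))   # completeness: sum_n |n><n| = identity

# Acting on an arbitrary |phi> reproduces |phi>
phi = np.array([1.0, 2.0, -1.0, 0.5])
assert np.allclose(P @ phi, phi)
```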
You remember we started with the energy eigenstates ψn(x) of a particle in a one-dimensional potential box and argued that these states (in the form of eigenkets |n⟩) may be thought of as unit (or basis) vectors in the linear vector space (on the same footing as the unit vectors e1, e2, e3 in ordinary space).
It should always be remembered that the basis states are, in fact, functions (instead of three-dimensional vectors). It is only because of the similarity between the functions and the unit vectors e1, e2, e3 (in ordinary space) that the functions are called unit (or basis) vectors (in linear vector space). We could even consider the (energy) eigenstates of a particle in a three-dimensional potential V(r), and then the set of these eigenstates ψnx,ny,nz(r) would form the set of unit vectors in a different linear vector space. So, in general, we may consider any (linear) operator Â, and from its eigenvalue equation can get a set of allowed eigenstates or eigenkets |n⟩ with eigenvalues an:
Â|n⟩ = an|n⟩
An operator  is said to be linear if it satisfies the equation
Â(c1|φ1⟩ + c2|φ2⟩) = c1Â|φ1⟩ + c2Â|φ2⟩
A state vector, say |φ⟩, may not be an eigenstate of the operator Â. Then the operator  operating on |φ⟩ transforms it into another state |χ⟩, like
Â|φ⟩ = |χ⟩
Equation (7.23) shows the linear operator  operating on a ket state |φ⟩.  may also operate on a bra state ⟨φ|. Then the convention is that the bra is put on the left of the operator Â, and the operation is defined like
⟨φ|Â = ⟨χ|
In fact, if we take the Hermitian conjugate of Eq. (7.23), we will get Eq. (7.24) if  is a Hermitian operator, since a bra is the Hermitian conjugate of the corresponding ket (see Section 7.6).
The transformation like Eq. (7.23), where the state vector |φ⟩ transforms to the vector |χ⟩ through the application of the operator Â, is said to be a linear transformation if it satisfies
Â(c1|φ1⟩ + c2|φ2⟩) = c1Â|φ1⟩ + c2Â|φ2⟩
An operator  is said to be an identity operator if
Â|φ⟩ = |φ⟩
for any state |φ⟩ in the vector space. An operator  is said to be a null operator (or zero operator) if
Â|φ⟩ = 0
for any |φ⟩ in the vector space. The sum or difference of two operators  and B̂ is defined as
( ± B̂)|φ⟩ = Â|φ⟩ ± B̂|φ⟩
The product of two operators  and B̂ is defined as
(ÂB̂)|φ⟩ = Â(B̂|φ⟩) = Â|χ⟩
where
|χ⟩ = B̂|φ⟩
Now
(B̂Â)|φ⟩ = B̂(Â|φ⟩) = B̂|η⟩
where
|η⟩ = Â|φ⟩
It is clear from Eqs (7.29) and (7.31) that, in general,
ÂB̂ ≠ B̂Â
In fact, it was already pointed out in Chapter 4 that linear operators do not necessarily commute.
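Non-commutativity is easy to exhibit with 2 × 2 matrices; the two matrices below are illustrative choices (they happen to be the Pauli matrices σx and σz, which in fact anticommute):

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # sigma_x
B = np.array([[1.0, 0.0], [0.0, -1.0]])  # sigma_z

AB = A @ B
BA = B @ A
assert not np.allclose(AB, BA)   # A B != B A: the operators do not commute
assert np.allclose(AB, -BA)      # these two even anticommute
```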
Let us first look into transformations in the three-dimensional ordinary space. Consider a position vector R expressed in a coordinate system (with unit vectors e1, e2, e3) as
R = x1e1 + x2e2 + x3e3
Keeping the coordinate system intact, let us rotate the vector R (clockwise) about the e3-axis through an angle ϕ (see Figure 7.1). Then the vector, now called R′, may be written as
R′ = x1′e1 + x2′e2 + x3′e3
where
x1′ = x1 cos ϕ + x2 sin ϕ
x2′ = –x1 sin ϕ + x2 cos ϕ
x3′ = x3
Figure 7.1 Cartesian coordinate system with three mutually perpendicular unit vectors e1, e2, e3 along three axes, q1, q2, q3. Vector R is rotated (clockwise) about q3-axis through an angle ϕ (keeping the axes intact). This is called the active rotation
or
R′ = R̂(ϕ) R
where
R̂(ϕ) = [cos ϕ  sin ϕ  0; –sin ϕ  cos ϕ  0; 0  0  1]
is the rotation matrix or rotation operator. The operator R̂(ϕ) is said to transform the vector R into the vector R′.
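The active rotation can be sketched numerically. The sign convention below (x1′ = x1 cos ϕ + x2 sin ϕ, consistent with a clockwise rotation of the vector) is an assumption matching the discussion above; the checks confirm that the matrix is orthogonal and preserves lengths:

```python
import numpy as np

phi = np.pi / 6  # rotation angle (arbitrary choice)
# Clockwise rotation of the vector about the e3-axis:
# x1' =  x1 cos(phi) + x2 sin(phi)
# x2' = -x1 sin(phi) + x2 cos(phi)
# x3' =  x3
Rot = np.array([[np.cos(phi),  np.sin(phi), 0.0],
                [-np.sin(phi), np.cos(phi), 0.0],
                [0.0,          0.0,         1.0]])

R = np.array([1.0, 2.0, 3.0])
Rp = Rot @ R
assert np.isclose(np.linalg.norm(Rp), np.linalg.norm(R))  # length preserved
assert np.allclose(Rot.T @ Rot, np.eye(3))                # Rot is orthogonal
```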
Let {|i⟩} denote the set of orthonormal basis states in a linear vector space of N dimensions (N may be finite or infinite). Any state vector |φ⟩ in this vector space may be represented in the form
|φ⟩ = Σi ci|i⟩
where
ci = ⟨i|φ⟩
We now consider an operator equation representing a linear transformation
Â|φ⟩ = |χ⟩
where  is a (linear) operator in this vector space. The state vector |χ⟩ may be expressed as
|χ⟩ = Σi di|i⟩
Equation (7.38) may be written in terms of the basis vectors as
Â Σj cj|j⟩ = Σi di|i⟩
or
Σj cj Â|j⟩ = Σi di|i⟩
Taking the scalar product of both sides with the bra ⟨i|, we get
Σj cj ⟨i|Â|j⟩ = Σj dj ⟨i|j⟩
or
Σj ⟨i|Â|j⟩ cj = di
or
Σj aij cj = di
This Eq. (7.41) expresses the linear transformation [Eq. (7.38)] by connecting the various coefficients cj of the state vector |φ⟩ to the coefficients di of the state vector |χ⟩ through the transformation operator Â.
The quantities
aij = ⟨i|Â|j⟩
are called the matrix elements of the operator  in the basis {|i⟩}. Equation (7.41) may be written as
di = Σj aij cj
This equation relates the coefficients cj, which define uniquely the state vector |φ⟩ [Eq. (7.36)], to the coefficients di, which define uniquely the state vector |χ⟩ [Eq. (7.39)]. Thus Eq. (7.41) is completely equivalent to the linear transformation [Eq. (7.38)]. The set of matrix elements aij specifies completely the operator  in the basis {|i⟩}. Equation (7.43) really represents a set of N linear algebraic equations, as follows:
d1 = a11c1 + a12c2 + … + a1NcN
d2 = a21c1 + a22c2 + … + a2NcN
⋮
dN = aN1c1 + aN2c2 + … + aNNcN
This set of equations may be written as a matrix equation,
[d1; d2; …; dN] = [a11 a12 … a1N; a21 a22 … a2N; …; aN1 aN2 … aNN][c1; c2; …; cN]
In linear vector space, the linear transformation [Eq. (7.38)] has been expressed in matrix form in Eq. (7.45) in the basis {|i⟩}. This transformation, in which the operator  transforms the state vector |φ⟩ into |χ⟩, is similar to the linear transformation in ordinary space where the operator R̂ transforms the vector R to the vector R′ [Eq. (7.35)]. In matrix representation, the ket vectors |φ⟩ and |χ⟩ are written in the form of column matrices, whereas the operator  is expressed in the form of a square matrix. The members aij are the elements of the matrix form of the operator Â. So  = (aij).
We have seen previously that in quantum mechanics the state vectors may be written in the form of a row or column matrix. For example, a state vector (ket vector) in a linear vector space (of N dimensions) described by the orthonormal basis set {|i⟩},
|φ⟩ = Σi ci|i⟩
may be rewritten as the one-column matrix
|φ⟩ = [c1; c2; …; cN]
Similarly, the vector
⟨φ| = Σi ci*⟨i|
may be rewritten as the one-row matrix
⟨φ| = [c1*  c2*  …  cN*]
An operator  in the linear vector space spanned by the complete orthonormal basis set {|i⟩} may be expressed as the square matrix
 = (aij)
where
aij = ⟨i|Â|j⟩
Other matrices related to  = (aij), which occur frequently in quantum mechanical discussions, are:
1. Complex conjugate of Â:  Â* = (aij*)
2. Transpose of Â:  Âᵀ = (aji)
3. Hermitian conjugate of Â:  † = (Âᵀ)* = (aji*)
which is obtained by conjugation and transposition of the elements.
4. Inverse of Â:  Â⁻¹, satisfying ÂÂ⁻¹ = Â⁻¹Â = Î
The transformation
Â|φ⟩ = |χ⟩
or
Σj aij cj = di
may be inverted, provided Eqs (7.23a) may be solved for the components cj in terms of the di, which is possible if and only if the determinant of the coefficients aij is not zero, that is,
det  ≠ 0
In this case, we may write
|φ⟩ = Â⁻¹|χ⟩
where the (operator) matrix Â⁻¹ has the elements
(Â⁻¹)ij = αji / det Â
with αij as the co-factor of the element aij of Â. Properties of all these special matrices are summarized in Table 7.1.
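These four companion matrices are one-liners in matrix software; the sketch below (NumPy, with an arbitrary invertible complex matrix as illustration) computes each and checks the defining relations:

```python
import numpy as np

A = np.array([[1 + 2j, 3j],
              [2 + 0j, 1 - 1j]])   # arbitrary complex matrix, det != 0

A_conj = A.conj()          # complex conjugate: (a_ij*)
A_trans = A.T              # transpose: (a_ji)
A_dagger = A.conj().T      # Hermitian conjugate: (a_ji*)
A_inv = np.linalg.inv(A)   # inverse

# Hermitian conjugate = conjugation + transposition (in either order)
assert np.allclose(A_dagger, A_trans.conj())
# Inverse satisfies A A^-1 = A^-1 A = identity
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
```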
We have already discussed in Section 4.8 that the Hamiltonian of a particle in a real potential is a Hermitian operator. We shall discuss here some characteristics of Hermitian operators. As mentioned previously and shown in Table 7.1, an operator  is Hermitian if
 = †
which means
aij = aji*
or
⟨i|Â|j⟩ = ⟨j|Â|i⟩*
or
∫ ψi*(x) Â ψj(x) dx = [∫ ψj*(x) Â ψi(x) dx]*
Table 7.1 Matrix Properties/Operator Properties
Let us now check the hermiticity of a few quantities:
Example 1 Let  = b (b is a real number). Then
∫ ψm*(x) b ψn(x) dx = b ∫ ψm*(x) ψn(x) dx = [∫ ψn*(x) b ψm(x) dx]*
So b is Hermitian.
Example 2 Let  = V(x) [V(x) is a real function of x]. Then, similarly,
∫ ψm*(x) V(x) ψn(x) dx = [∫ ψn*(x) V(x) ψm(x) dx]*
So V(x) is Hermitian.
Example 3 Let  = ∂/∂x. Consider the integral ∫ ψm*(x) (∂ψn/∂x) dx. Integrating by parts, we get
∫ ψm*(x) (∂ψn/∂x) dx = [ψm*(x) ψn(x)] − ∫ (∂ψm*/∂x) ψn(x) dx
The first term vanishes, as the wave functions ψm and ψn, if square integrable, vanish at x = ±∞. So
∫ ψm*(x) (∂ψn/∂x) dx = −∫ (∂ψm*/∂x) ψn(x) dx = −[∫ ψn*(x) (∂ψm/∂x) dx]*
So ∂/∂x is not Hermitian.
Example 4 Let  = −iħ∂/∂x = p̂x. Using the result of Example 3,
∫ ψm*(x) (−iħ ∂ψn/∂x) dx = iħ ∫ (∂ψm*/∂x) ψn(x) dx = [∫ ψn*(x) (−iħ ∂ψm/∂x) dx]*
So p̂x = −iħ∂/∂x is a Hermitian operator. Similarly, p̂y and p̂z are also Hermitian. Therefore, p̂² = p̂x² + p̂y² + p̂z² is also Hermitian. It follows, then, that the free-particle Hamiltonian Ĥ = p̂²/2m is Hermitian.
For a particle in a potential field V(r) [where V(r) is real], the Hamiltonian
Ĥ = p̂²/2m + V(r) = −(ħ²/2m)∇² + V(r)
is Hermitian.
Even at the cost of repetition, we shall discuss the following theorems.
The eigenvalues of a Hermitian operator are real.
▷ Proof
Let |φ⟩ be an eigenstate of a Hermitian operator Â, with eigenvalue λ.
Then,
Â|φ⟩ = λ|φ⟩
so that
⟨φ|Â|φ⟩ = λ⟨φ|φ⟩
Taking the complex conjugate,
⟨φ|Â|φ⟩* = ⟨φ|†|φ⟩ = ⟨φ|Â|φ⟩ (as  is Hermitian)
or
λ*⟨φ|φ⟩ = λ⟨φ|φ⟩
As ⟨φ|φ⟩ ≠ 0, λ = λ* and hence λ is real.
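The theorem is easy to confirm numerically: for a Hermitian matrix, every eigenvalue comes out real to machine precision. A sketch (NumPy; the matrix is an arbitrary Hermitian example):

```python
import numpy as np

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(H, H.conj().T)         # H is Hermitian

vals = np.linalg.eigvals(H)
assert np.max(np.abs(vals.imag)) < 1e-10  # all eigenvalues are real
```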
The eigenstates of a Hermitian operator Â, belonging to distinct eigenvalues are orthogonal.
▷ Proof
Let |φ⟩ and |χ⟩ be eigenstates of  with eigenvalues λ and β, respectively. So
Â|φ⟩ = λ|φ⟩
and
Â|χ⟩ = β|χ⟩
Taking the scalar products of Eqs (7.62a) and (7.62b) with the eigenstates |χ⟩ and |φ⟩, respectively, we get
⟨χ|Â|φ⟩ = λ⟨χ|φ⟩
and
⟨φ|Â|χ⟩ = β⟨φ|χ⟩
As the operator  is Hermitian (and β is real), taking the complex conjugate of Eq. (7.63b) gives
⟨χ|Â|φ⟩ = β⟨χ|φ⟩
Taking the difference of (7.63a) and (7.63c), one has
(λ − β)⟨χ|φ⟩ = 0
As λ ≠ β, one is left only with the option
⟨χ|φ⟩ = 0
So the eigenstates |φ⟩ and |χ⟩ are orthogonal.
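This, too, can be checked numerically with the same Hermitian matrix as before: its two eigenvalues are distinct, and the corresponding eigenvectors are orthogonal under the complex inner product:

```python
import numpy as np

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
vals, vecs = np.linalg.eigh(H)   # eigh is the Hermitian eigensolver

assert not np.isclose(vals[0], vals[1])   # distinct eigenvalues
ip = np.vdot(vecs[:, 0], vecs[:, 1])      # <phi|chi>, conjugated dot product
assert abs(ip) < 1e-12                    # eigenvectors are orthogonal
```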
As discussed in Section 7.3, let us consider a position vector R,
R = x1e1 + x2e2 + x3e3
in the three-dimensional direct space described by mutually perpendicular axes q1, q2, q3 (with unit vectors e1, e2, e3 along these axes) (Figure 7.2). Let us now rotate the Cartesian coordinate system anti-clockwise about the q3-axis through an angle ϕ (the vector R is not touched). Let us call the new axes q1′, q2′, q3′ and the unit vectors along these e1′, e2′, e3′, respectively. It can be easily checked that
e1′ = e1 cos ϕ + e2 sin ϕ
e2′ = −e1 sin ϕ + e2 cos ϕ
e3′ = e3
The set of three equations (7.65a) may be written as
ei′ = Σj qij ej
where qij are the matrix elements of the rotation matrix
Q̂ = [cos ϕ  sin ϕ  0; −sin ϕ  cos ϕ  0; 0  0  1]
Also, if the vector R is now expressed in the new (rotated) coordinate system as
R = x1′e1′ + x2′e2′ + x3′e3′
it can be easily seen that
xi′ = Σj qij xj
Figure 7.2 Cartesian coordinate system. The coordinate system is rotated (anti-clockwise) about the q3-axis through an angle ϕ (keeping vector R untouched). This is called a passive rotation
In fact, the relation between the orthogonal set of unit vectors ei′ and e1, e2, e3 may be obtained for any general rotation of the coordinate system. Similarly, the relation between the components of a position vector in the rotated and original coordinate systems may be obtained for any general rotation. It can be easily checked that
Σi xi′² = Σi xi²
and for any two vectors R and S
R · S = Σi Ri Si = Σi Ri′ Si′
Let us now consider a linear vector space of N dimensions spanned by the complete orthonormal set of state vectors {|i⟩}. A new basis set may be constructed in this linear vector space by forming N linear combinations of the orthonormal basis vectors (similar to the case of forming new unit vectors ei′ as linear combinations of the unit vectors ej in three-dimensional ordinary space). Let the new basis vectors be denoted by {|i′⟩}. Then
|i′⟩ = Σj uij* |j⟩
where the uij are complex numbers to be chosen in such a way that the new basis set is orthonormal.
The condition of orthonormality on the new basis set gives
⟨i′|j′⟩ = δij
Substituting from Eq. (7.69), the condition (7.70) gives
Σk Σl uik ujl* ⟨k|l⟩ = δij
or
Σk uik ujk* = δij
This relation leads the matrix Ȗ = (uij) to satisfy the relation
Ȗ Ȗ† = Ȗ† Ȗ = Î
which is, in fact, the definition of a unitary matrix. So the complex numbers uij appearing in the transformation equation [Eq. (7.69)] are the matrix elements of a unitary matrix Ȗ. Therefore, the transformation given by Eq. (7.69) is called a unitary transformation. Such a unitary transformation is the mechanism of substituting the basis set {|i⟩} by the new basis set {|i′⟩}.
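A convenient numerical way to manufacture a unitary Ȗ is to orthonormalize the columns of a random complex matrix via a QR decomposition (a Gram–Schmidt-like procedure); the result satisfies ȖȖ† = Ȗ†Ȗ = Î and its columns form a new orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(M)   # orthonormalized columns -> unitary U

assert np.allclose(U @ U.conj().T, np.eye(3))   # U U^dagger = I
assert np.allclose(U.conj().T @ U, np.eye(3))   # U^dagger U = I

# Columns of U are a new orthonormal basis {|i'>}
assert np.isclose(abs(np.vdot(U[:, 0], U[:, 1])), 0.0)
```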
Let us consider a state vector |φ⟩, represented with respect to the basis set {|i⟩} by
|φ⟩ = Σi bi|i⟩
and with respect to the basis set {|i′⟩} by
|φ⟩ = Σi bi′|i′⟩
Since these two expressions (7.73) and (7.74) represent one and the same state vector |φ⟩, we have
Σi bi|i⟩ = Σi bi′|i′⟩
Multiplying both sides (from the left) by the bra vector ⟨j′|, we get
Σi bi ⟨j′|i⟩ = Σi bi′ ⟨j′|i′⟩ = bj′
or
bj′ = Σi uji bi
We know the matrix Ȗ is unitary, so we may invert this relation to get
bi = Σj uji* bj′
This is the equation which provides the connection between the components bi and bi′ of the state vector |φ⟩ with respect to the bases {|i⟩} and {|i′⟩}.
It may be emphasized here that a state vector has an identity independent of the basis with respect to which its components are given in the linear vector space. The situation is just like that of a position vector R in coordinate space. The vector R has an identity independent of the coordinate system (be it a Cartesian, spherical polar, cylindrical, or any other curvilinear coordinate system) with respect to which its components are given.
Let us now consider an operator  in the linear vector space. Let this operator be specified by the matrix with matrix elements aij with respect to the basis set {|i⟩},
 = (aij)
where
aij = ⟨i|Â|j⟩
Let this operator  be expressed through the matrix elements aij′ with respect to the basis set {|i′⟩}, where we start calling it Â′, that is,
Â′ = (aij′)
where
aij′ = ⟨i′|Â|j′⟩
while the basis vectors |i⟩ and |i′⟩ are related through the unitary transformation [Eq. (7.69)]. It is easy to find the relation between aij and aij′:
aij′ = Σk Σl uik akl ujl*
The set of equations (7.82) may be written in operator form as
Â′ = Ȗ Â Ȗ†
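A unitary change of basis, Â → ȖÂȖ†, leaves the operator's physical content intact; numerically, the trace and the eigenvalue spectrum are unchanged. A sketch (NumPy; the operator and the unitary are random illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

Ap = U @ A @ U.conj().T   # A' = U A U^dagger

assert np.isclose(np.trace(Ap), np.trace(A))        # trace is invariant
ev_A = np.sort_complex(np.linalg.eigvals(A))
ev_Ap = np.sort_complex(np.linalg.eigvals(Ap))
assert np.allclose(ev_A, ev_Ap)                     # spectrum is invariant
```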
In this chapter, we have discussed the linear vector space corresponding to a single-particle system: a particle in some potential field having eigenstates ψn(x). The linear vector space of a particle is spanned by the set of all its orthonormal eigenkets {|n⟩}. In fact, the quantum number n may represent a set of two quantum numbers nx and ny (for a particle in a two-dimensional potential field) or a set of three quantum numbers nx, ny, and nz (for a particle in a three-dimensional potential field). The linear vector space may be of finite dimension or of infinite dimension (i.e., the set {|n⟩} may be finite or infinite). Let us for the moment assume it to be of finite dimension N. Now the linear vector space of a system of two particles (not necessarily in the same potential field) may be regarded as the tensor product (or direct product) of the linear vector spaces of the individual particles. The tensor product space is formed out of the two independent and unrelated vector spaces that are spanned by the basis vectors of the two individual particles, say by the basis sets {|i⟩1} and {|j⟩2}. The basis vectors of the tensor product space are constructed as
|i⟩1 ⊗ |j⟩2
or, in abbreviated notation,
|i⟩1|j⟩2 = |i, j⟩
If N1 and N2 are the dimensions of the two individual vector spaces, the tensor product space has dimension N1 × N2. Let us take a simple example with N1 = 2 and N2 = 3. So the tensor product space is of dimension 2 × 3 = 6. The six basis vectors of the product space are
|1, 1⟩, |1, 2⟩, |1, 3⟩, |2, 1⟩, |2, 2⟩, |2, 3⟩
To be more general, the tensor product space is formed out of two independent and unrelated vector spaces, each spanned by its own basis vectors. Suppose V and W are linear vector spaces of dimensions M and N, respectively. Then the tensor product of V and W, written as V ⊗ W (and read as 'V tensor W'), is an MN-dimensional vector space. We may say the tensor product is a way of putting vector spaces together in order to form larger vector spaces. Let {|i⟩} be the set of M basis vectors of the vector space V and {|j⟩} be the set of N basis vectors of the vector space W. Then the set of basis vectors of the space V ⊗ W is {|i⟩ ⊗ |j⟩}. Generally, the abbreviated notations |i⟩|j⟩, |i, j⟩, or even |ij⟩ are used for the tensor product |i⟩ ⊗ |j⟩. Let us take a simple example with M = 2 and N = 3. In terms of column vectors, the two basis vectors |1⟩ and |2⟩ of space V are written as
|1⟩ = [1; 0], |2⟩ = [0; 1]
Similarly, the three basis vectors |1⟩, |2⟩, and |3⟩ of space W are written as
|1⟩ = [1; 0; 0], |2⟩ = [0; 1; 0], |3⟩ = [0; 0; 1]
The six basis vectors of space V ⊗ W are written as
|1⟩ ⊗ |1⟩ = [1; 0; 0; 0; 0; 0], |1⟩ ⊗ |2⟩ = [0; 1; 0; 0; 0; 0], |1⟩ ⊗ |3⟩ = [0; 0; 1; 0; 0; 0],
|2⟩ ⊗ |1⟩ = [0; 0; 0; 1; 0; 0], |2⟩ ⊗ |2⟩ = [0; 0; 0; 0; 1; 0], |2⟩ ⊗ |3⟩ = [0; 0; 0; 0; 0; 1]
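NumPy's np.kron implements exactly this construction; the sketch below checks that the Kronecker product of basis vectors gives the six product-space basis vectors in the order stated above:

```python
import numpy as np

v1, v2 = np.eye(2)        # basis of V: |1>, |2>
w1, w2, w3 = np.eye(3)    # basis of W: |1>, |2>, |3>

# |1> (x) |1> is the first 6-dimensional basis vector, |2> (x) |3> the last
assert np.kron(v1, w1).tolist() == [1, 0, 0, 0, 0, 0]
assert np.kron(v1, w3).tolist() == [0, 0, 1, 0, 0, 0]
assert np.kron(v2, w3).tolist() == [0, 0, 0, 0, 0, 1]
```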
We can similarly write the tensor product of two state vectors
|v⟩ = α1|1⟩ + α2|2⟩ and |w⟩ = β1|1⟩ + β2|2⟩ + β3|3⟩ (in spaces V and W, respectively)
as
|v⟩ ⊗ |w⟩ = Σi Σj αi βj |i⟩ ⊗ |j⟩ = [α1β1; α1β2; α1β3; α2β1; α2β2; α2β3]
Let us now look at what sorts of linear operators act on the vector space V ⊗ W. Suppose |v⟩ and |w⟩ are vectors in the spaces V and W, respectively. Also suppose  and B̂ are linear operators on the spaces V and W, respectively. Then we define a linear operator  ⊗ B̂ (called a tensor product operator) on the space V ⊗ W as follows:
( ⊗ B̂)(|v⟩ ⊗ |w⟩) = Â|v⟩ ⊗ B̂|w⟩
and if the operators  and B̂ are expressed in matrix representation as
 = (aij), an M × M matrix, and B̂ = (bij), an N × N matrix,
then
 ⊗ B̂ = [a11B̂  a12B̂  … ; a21B̂  a22B̂  … ; …]
Here, for example, the term a11B̂ denotes the N × N submatrix whose (matrix) elements are the matrix elements of B̂, each multiplied by a11. For example, for a two-dimensional V space and a three-dimensional W space, we have the operators  and B̂ as
 = [a11 a12; a21 a22] and B̂ = [b11 b12 b13; b21 b22 b23; b31 b32 b33]
then the operator  ⊗ B̂ on the 6-dimensional space V ⊗ W is the 6 × 6 matrix
 ⊗ B̂ = [a11B̂  a12B̂; a21B̂  a22B̂]
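The block structure of  ⊗ B̂ is again np.kron, and the defining identity ( ⊗ B̂)(|v⟩ ⊗ |w⟩) = Â|v⟩ ⊗ B̂|w⟩ can be checked directly (the matrices and vectors below are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # 2x2 operator on V
B = np.arange(9.0).reshape(3, 3)    # 3x3 operator on W

AB = np.kron(A, B)                  # 6x6 block matrix [[1*B, 2*B], [3*B, 4*B]]
assert AB.shape == (6, 6)
assert np.allclose(AB[:3, :3], 1.0 * B)   # upper-left block is a11 * B
assert np.allclose(AB[:3, 3:], 2.0 * B)   # upper-right block is a12 * B

v = np.array([1.0, -1.0])
w = np.array([0.5, 0.0, 2.0])
# (A (x) B)(v (x) w) = (A v) (x) (B w)
assert np.allclose(AB @ np.kron(v, w), np.kron(A @ v, B @ w))
```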
Let us now discuss the outer product notation for operators. The outer product |χ⟩⟨φ| of two state vectors |χ⟩ and |φ⟩ is a linear transformation operator. The outer product is really equivalent to the matrix (di cj*). For example, let us consider the state vectors |φ⟩ and |χ⟩ in an N-dimensional vector space, as expressed in Eqs (7.36) and (7.39), that is,
|φ⟩ = Σj cj|j⟩
and
|χ⟩ = Σi di|i⟩
The vectors |φ⟩ and |χ⟩ may be written in column matrix form (in the basis {|i⟩}) as
|φ⟩ = [c1; c2; …; cN] and |χ⟩ = [d1; d2; …; dN]
The outer product of |χ⟩ and ⟨φ| may then be expressed as
|χ⟩⟨φ| = [d1; d2; …; dN][c1*  c2*  …  cN*] = the N × N matrix with elements di cj*
Alternatively, we may follow the steps given below to arrive at the matrix form (7.96):
From Eqs (7.36) and (7.39), we have
|χ⟩⟨φ| = (Σi di|i⟩)(Σj cj*⟨j|) = Σi Σj di cj* |i⟩⟨j|
So the (l, k) matrix element of the operator |χ⟩⟨φ| in the basis {|i⟩} is
⟨l| (|χ⟩⟨φ|) |k⟩ = dl ck*
which is the same as Eq. (7.96).
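The outer product with matrix elements di cj* is np.outer(d, c.conj()); acting on |φ⟩ it returns ⟨φ|φ⟩ |χ⟩, as the bracketing |χ⟩⟨φ|φ⟩ suggests. A sketch with arbitrary two-component vectors:

```python
import numpy as np

c = np.array([1 + 1j, 2.0 + 0j])   # components of |phi>
d = np.array([3.0 + 0j, 1j])       # components of |chi>

Op = np.outer(d, c.conj())         # |chi><phi|, elements (Op)_ij = d_i c_j*
assert Op.shape == (2, 2)
assert Op[0, 1] == d[0] * np.conj(c[1])

# |chi><phi| |phi> = <phi|phi> |chi>
assert np.allclose(Op @ c, np.vdot(c, c) * d)
```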
Let us consider a two-dimensional vector space (e.g., the vector space of spin-½ particles, which we shall study in detail in Chapter 12). The basis vectors are generally taken as the eigenstates |↑⟩ and |↓⟩ of the spin angular momentum operator Ŝz.
The state vector |↑⟩ is also written as |0⟩ or [1; 0]. Similarly, |↓⟩ is also written as |1⟩ or [0; 1]. We have the following observations:
1. |0⟩⟨0| + |1⟩⟨1| = [1 0; 0 0] + [0 0; 0 1] = [1 0; 0 1] = Î
which is the same as the requirement of completeness [Eq. (7.20)].
2. The outer product
|0⟩⟨1| = [1; 0][0  1] = [0 1; 0 0]
and we may write
|0⟩⟨1| |1⟩ = |0⟩⟨1|1⟩
so, using
⟨1|1⟩ = 1
and
⟨1|0⟩ = 0
Therefore,
|0⟩⟨1| |1⟩ = |0⟩ and |0⟩⟨1| |0⟩ = 0
We observe that the operator |0⟩⟨1| operating on |1⟩ changes it to |0⟩
and on |0⟩ gives the null state.
Similarly, the operator |1⟩⟨0| operating on |0⟩ gives the state |1⟩, while operating on |1⟩ it gives the null state.
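These spin-space observations translate directly into 2 × 2 matrix algebra; a sketch with |0⟩ = [1; 0] and |1⟩ = [0; 1]:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])   # |0> = |up>
ket1 = np.array([0.0, 1.0])   # |1> = |down>

op01 = np.outer(ket0, ket1)   # |0><1| = [[0, 1], [0, 0]]
op10 = np.outer(ket1, ket0)   # |1><0| = [[0, 0], [1, 0]]

# Completeness: |0><0| + |1><1| = identity
assert np.allclose(np.outer(ket0, ket0) + np.outer(ket1, ket1), np.eye(2))

assert np.allclose(op01 @ ket1, ket0)          # |0><1| |1> = |0>
assert np.allclose(op01 @ ket0, np.zeros(2))   # |0><1| |0> = 0 (null state)
assert np.allclose(op10 @ ket0, ket1)          # |1><0| |0> = |1>
assert np.allclose(op10 @ ket1, np.zeros(2))   # |1><0| |1> = 0 (null state)
```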
Exercise 7.1
Show that
Exercise 7.2
Show that  + †, i( − †), and †Â
are all Hermitian for any operator Â.
Exercise 7.3
Consider two Hermitian operators  and B̂. Show that ÂB̂ is Hermitian only if  and B̂ commute.
Exercise 7.4
If  is a Hermitian operator, show that e^(iÂ) is a unitary operator.
Exercise 7.5
Show that is a unitary operator.
Exercise 7.6
Show that any operator  may be expressed as the linear combination of two Hermitian operators.
Exercise 7.7
If  and B̂ are Hermitian, which of the following are Hermitian?
Exercise 7.8
If B̂ = †Â, show that the trace of the matrix B̂ is positive definite unless  is a null matrix, in which case tr B̂ = 0.
Exercise 7.9
If  and B̂ are two non-commuting Hermitian operators and Ĉ = i(ÂB̂ − B̂Â), show that Ĉ is Hermitian.
Solution 7.1
(b) Let us take the (i, j)th element of the L.H.S.
So
Solution 7.2
From the definition of the Hermitian conjugate of an operator Â, we can easily check that
(†)† = Â
so, we have
( + †)† = † +  =  + †
[i( − †)]† = −i(† − Â) = i( − †)
(†Â)† = †(†)† = †Â
Each of these is unchanged under Hermitian conjugation and is, therefore, Hermitian.
Solution 7.3
Solution 7.4
Let
Û = e^(iÂ)
So
Û† = e^(−i†) = e^(−iÂ) (as  is Hermitian)
and
ÛÛ† = e^(iÂ) e^(−iÂ) = Î = Û†Û
Therefore, Û = e^(iÂ) is a unitary operator.
Solution 7.5
An operator  is unitary if
† = † = Î (an identity operator)
For
so
Solution 7.6
Any operator  may be decomposed as
 = ½( + †) + ½( − †) = ½( + †) − (i/2) [i( − †)]
We know from Solution (7.2) that each part, ( + †) and i( − †), is Hermitian. So any operator  may be decomposed into a linear combination of two Hermitian parts.
Solution 7.7
so Hermitian.
Solution 7.8
Solution 7.9
Let Ĉ = i(ÂB̂ − B̂Â). Then
Ĉ† = [i(ÂB̂ − B̂Â)]† = −i(B̂†Â† − †B̂†) = −i(B̂Â − ÂB̂) = i(ÂB̂ − B̂Â) = Ĉ
so Ĉ is Hermitian.