Chapter 9. Direction-Of-Arrival Estimation Algorithms

This chapter provides a detailed overview of the various methods available for estimation of the Direction-Of-Arrival (also called Angle-Of-Arrival) of a radio signal using an antenna array. Promising methods for determining the position of a mobile user are also given. The use of adaptive arrays is critical in position location applications [Ken96], and, given the recent mandate by the FCC requiring 125 m location accuracy for wireless emergency calls by the year 2001 [Ree98][Rap96b], there is intense interest in determining the Direction-Of-Arrival of RF signals in wireless systems.

The array-based Direction-Of-Arrival (DOA) estimation techniques considered here are broadly divided into four types: conventional techniques, subspace based techniques, maximum likelihood techniques, and integrated techniques which combine property restoral methods with subspace based approaches. Conventional methods are based on classical beamforming techniques discussed in Chapter 3 and require a large number of elements to achieve high resolution. Subspace based methods are high resolution sub-optimal techniques which exploit the eigenstructure of the input data matrix. Maximum likelihood techniques are optimal techniques which perform well even under low signal-to-noise ratio conditions, but are often computationally very intensive. A promising method for CDMA is the integrated approach, which uses property-restoral based techniques to separate multiple signals and estimate their spatial signatures, from which their Directions-Of-Arrival can be determined using subspace techniques [Rap98].

Conventional Methods for DOA Estimation

Conventional methods for Direction-Of-Arrival estimation are based on the concepts of beamforming and null-steering and do not exploit the nature of the model of the received signal vector u(k) or the statistical model of the signals and noise. Given the knowledge of the array manifold, an array can be steered electronically as described in Chapter 3. Conventional DOA estimation techniques electronically steer beams in all possible directions and look for peaks in the output power [Sch93d]. The conventional methods discussed here are the delay-and-sum method (classical beamformer) and Capon’s minimum variance method.

Delay-and-Sum Method

The delay-and-sum method, also referred to as the classical beamformer method or Fourier method, is one of the simplest techniques for DOA estimation. Figure 9-1 shows the classical narrowband beamformer structure, where the output signal y(k) is given by a linearly weighted sum of the sensor element outputs. That is,

Equation 9.1. $y(k) = \mathbf{w}^H\,\mathbf{u}(k) = \sum_{m=0}^{M-1} w_m^{*}\, u_m(k)$

Figure 9-1. Illustration of the classical beamforming structure.

The total output power of the conventional beamformer can be expressed as

Equation 9.2. $P(\mathbf{w}) = E\left\{\left|y(k)\right|^2\right\} = E\left\{\mathbf{w}^H\mathbf{u}(k)\,\mathbf{u}^H(k)\mathbf{w}\right\} = \mathbf{w}^H\,\mathbf{R}_{uu}\,\mathbf{w}$

where Ruu is the autocorrelation matrix of the array input data as defined in (8.2). Equation (9.2) plays a central role in all of the conventional DOA estimation algorithms. The autocorrelation matrix Ruu contains useful information about both the array response vectors and the signals themselves, and it is possible to estimate signal parameters by careful interpretation of Ruu.

Consider a signal s(k) impinging on the array at an angle φ0. Following the narrowband input data model expressed in (8.5), the power at the beamformer output can be expressed as

Equation 9.3. $P(\mathbf{w}) = E\left\{\left|\mathbf{w}^H\left(\mathbf{a}(\phi_0)s(k) + \mathbf{n}(k)\right)\right|^2\right\} = \sigma_s^2\left|\mathbf{w}^H\mathbf{a}(\phi_0)\right|^2 + \sigma_n^2\left\|\mathbf{w}\right\|^2$

where a(φ0) is the steering vector associated with the DOA angle φ0, n(k) is the noise vector at the array input, and $\sigma_s^2 = E\left[\left|s(k)\right|^2\right]$ and $\sigma_n^2 = E\left[\left|n(k)\right|^2\right]$ are the signal power and noise power, respectively. It is clearly seen from (9.3) that the output power is maximized when w = a(φ0). Therefore, of all the possible weight vectors, the receiver antenna has the highest gain in the direction φ0 when w = a(φ0). This is because w = a(φ0) aligns the phases of the signal components arriving from φ0 at the sensors, causing them to add constructively.

In the classical beamforming approach to DOA estimation, the beam is scanned over the angular region of interest in discrete steps by forming weights w = a(φ) for different φ, and the output power is measured. Using equation (9.3), the output power at the classical beamformer as a function of the Angle-Of-Arrival is given by

Equation 9.4. $P_{cbf}(\phi) = \mathbf{a}^H(\phi)\,\mathbf{R}_{uu}\,\mathbf{a}(\phi)$

Therefore, if we have an estimate of the input autocorrelation matrix and know the steering vectors a(φ) for all φ’s of interest (either through calibration or analytical computation), it is possible to estimate the output power as a function of the Angle-Of-Arrival φ. The output power as a function of Angle-Of-Arrival is often termed the spatial spectrum. Clearly, the Directions-Of-Arrival can be estimated by locating peaks in the spatial spectrum defined in (9.4).
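To make the scanning procedure concrete, the following Python/NumPy sketch (the language used for all code examples in this chapter) computes the delay-and-sum spectrum of (9.4) for a simulated six-element, half-wavelength uniformly spaced linear array with two equal-power sources at 90 and 100 degrees at 20 dB SNR, mirroring the scenario of Figure 9-2. The angle convention (φ measured from the array axis, matching (9.31)) and the snapshot count are illustrative assumptions.

    import numpy as np

    def steering_vector(phi_deg, M, d=0.5):
        # ULA steering vector a(phi); phi measured from the array axis,
        # d is the interelement spacing in wavelengths (assumed half-wavelength)
        m = np.arange(M)
        return np.exp(1j * 2 * np.pi * d * m * np.cos(np.radians(phi_deg)))

    def classical_spectrum(R, M, phis=np.arange(0.0, 180.5, 0.5)):
        # P(phi) = a^H(phi) R_uu a(phi), equation (9.4)
        return np.array([(steering_vector(p, M).conj() @ R @ steering_vector(p, M)).real
                         for p in phis])

    # Simulated scenario: two equal-power sources at 90 and 100 degrees, 20 dB SNR
    M, N = 6, 1000
    rng = np.random.default_rng(0)
    A = np.column_stack([steering_vector(90, M), steering_vector(100, M)])
    S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
    noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    U = A @ S + noise                  # array snapshots, following the model of (8.5)
    R = U @ U.conj().T / N             # sample estimate of R_uu
    P_delay_sum = classical_spectrum(R, M)
    # with only M = 6 elements, the two closely spaced sources are not resolved here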

The delay-and-sum method has many disadvantages. The width of the beam and the height of the sidelobes limit the effectiveness when signals arriving from multiple directions and/or sources are present, because the signals over a wide angular region contribute to the measured average power at each look direction. Hence, this technique has poor resolution. Although it is possible to increase the resolution by adding more sensor elements, increasing the number of sensors increases the number of receivers and the amount of storage required for the calibration data, i.e., a(φ).

Capon’s Minimum Variance Method

The delay-and-sum method works on the premise that pointing the strongest beam in a particular direction yields the best estimate of the power arriving in that direction. In other words, all of the degrees of freedom available to the array are used in forming a beam in the required look direction. This works well when there is only one signal present. But when more than one signal is present, the array output power contains contributions from the desired signal as well as from the undesired signals arriving from other directions.

Capon’s minimum variance technique [Cap69] attempts to overcome the poor resolution problems associated with the delay-and-sum method. The technique uses some (not all) of the degrees of freedom to form a beam in the desired look direction while simultaneously using the remaining degrees of freedom to form nulls in the direction of interfering signals. This technique minimizes the contribution of the undesired interferences by minimizing the output power while maintaining the gain along the look direction to be constant, usually unity. That is,

Equation 9.5. $\min_{\mathbf{w}}\; \mathbf{w}^H\mathbf{R}_{uu}\mathbf{w} \quad \text{subject to} \quad \mathbf{w}^H\mathbf{a}(\phi_0) = 1$

The weight vector obtained by solving (9.5) is often called the minimum variance distortionless response (MVDR) beamformer weight vector since, for a particular look direction, it minimizes the variance (average power) of the output signal while passing the signal arriving in the look direction without distortion (unity gain and zero phase shift). Equation (9.5) represents a constrained optimization problem which can be solved using the method of Lagrange multipliers. This approach converts the constrained optimization problem into an unconstrained one, thereby allowing the use of Least Squares techniques to determine the solution. Using a Lagrange multiplier, the weight vector that solves (9.5) can be shown to be [Hay91]

Equation 9.6. $\mathbf{w} = \dfrac{\mathbf{R}_{uu}^{-1}\,\mathbf{a}(\phi_0)}{\mathbf{a}^H(\phi_0)\,\mathbf{R}_{uu}^{-1}\,\mathbf{a}(\phi_0)}$

Now the output power of the array as a function of the Angle-Of-Arrival, using Capon’s beamforming method, is given by Capon’s spatial spectrum,

Equation 9.7. $P_{Capon}(\phi) = \dfrac{1}{\mathbf{a}^H(\phi)\,\mathbf{R}_{uu}^{-1}\,\mathbf{a}(\phi)}$

By computing and plotting Capon’s spectrum over the whole range of φ, the DOA’s can be estimated by locating the peaks in the spectrum.
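As a sketch, Capon's spectrum (9.7) can be computed by reusing steering_vector and the sample covariance R from the previous snippet; the small diagonal loading term is an added assumption for numerical stability of the matrix inverse.

    import numpy as np

    Rinv = np.linalg.inv(R + 1e-6 * np.trace(R).real / M * np.eye(M))
    P_capon = np.array([1.0 / (steering_vector(p, M).conj() @ Rinv
                               @ steering_vector(p, M)).real
                        for p in np.arange(0.0, 180.5, 0.5)])

Peaks of P_capon over the scan grid give the DOA estimates; with the six-element array this spectrum separates the 90 and 100 degree pair that the delay-and-sum spectrum cannot, consistent with Figure 9-2.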

Although it is not a maximum likelihood (ML) estimator, Capon’s method is sometimes referred to as an ML estimator, since for any choice of φ, PCapon(φ) is the maximum likelihood estimate of the power of a signal arriving from the direction φ in the presence of white Gaussian noise having arbitrary spatial characteristics [Cap79].

Figure 9-2 illustrates the performance improvement obtained by Capon's method over the delay-and-sum method. Computer simulations with a six-element uniformly spaced linear array with half-wavelength interelement spacing show that Capon's method is able to distinguish between two signals arriving at 90 and 100 degrees, respectively, while the delay-and-sum method fails to differentiate between them.

Figure 9-2. Comparison of resolution performance of delay-and-sum method and Capon’s minimum variance method. Two signals of equal power at an SNR of 20 dB arrive at a 6-element uniformly spaced array with an interelement spacing equal to half a wavelength at angles 90 and 100 degrees, respectively.

Though it provides better resolution than the delay-and-sum method, Capon's method suffers from several disadvantages. Capon's method fails if other signals that are correlated with the Signal-of-Interest are present, because it inadvertently uses that correlation to reduce the processor output power without spatially nulling it [Sch93d]. In other words, the correlated components may be combined destructively in the process of minimizing the output power. Also, Capon's method requires the computation of a matrix inverse, which can be expensive for large arrays.

Subspace Methods for DOA Estimation

Though many of the classical beamforming based methods such as Capon’s minimum variance method are often successful and are widely used, these methods have some fundamental limitations in resolution. Most of these limitations arise because they do not exploit the structure of the input data model given in (8.5). Schmidt [Sch79] and Bienvenu and Kopp [Bie79] were the first to exploit the structure of a more accurate data model for the case of sensor arrays of arbitrary form. Schmidt derived a complete geometric solution to the DOA estimation problem in the absence of noise, and extended the geometric concepts to obtain a reasonable approximation to the solution in the presence of noise. The technique proposed by Schmidt is called the MUltiple SIgnal Classification (MUSIC) algorithm, and it has been thoroughly investigated since its inception [Bar84][Sto89][Pil89a]. The geometric concepts upon which MUSIC is founded form the basis for a much broader class of subspace-based algorithms [Pau93][Joh86]. Apart from MUSIC, the primary contributions to the subspace-based algorithms include the Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT) proposed by Roy et al. [Pau86b][Roy89][Roy90] and the minimum-norm method proposed by Kumaresan and Tufts [Kum83].

The MUSIC Algorithm

The MUSIC algorithm proposed by Schmidt in 1979 [Sch79][Sch86a] is a high resolution multiple signal classification technique based on exploiting the eigenstructure of the input covariance matrix. MUSIC is a signal parameter estimation algorithm which provides information about the number of incident signals, Direction-Of-Arrival (DOA) of each signal, strengths and cross correlations between incident signals, noise power, etc. While the MUSIC algorithm provides very high resolution, it requires very precise and accurate array calibration. The MUSIC algorithm has been implemented and its performance has been experimentally verified [Sch86b].

The development of the MUSIC algorithm is based on a geometric view of the signal parameter estimation problem. Following the narrowband data model discussed in Chapter 8, if there are D signals incident on the array, the received input data vector at an M-element array can be expressed as a linear combination of the D incident waveforms and noise. That is,

Equation 9.8. $\mathbf{u}(t) = \sum_{i=0}^{D-1}\mathbf{a}(\phi_i)\,s_i(t) + \mathbf{n}(t)$

Equation 9.9. $\mathbf{u}(t) = \mathbf{A}\,\mathbf{s}(t) + \mathbf{n}(t), \qquad \mathbf{A} = \left[\mathbf{a}(\phi_0)\;\mathbf{a}(\phi_1)\;\cdots\;\mathbf{a}(\phi_{D-1})\right]$

where sT(t) = [s0(t) s1(t) ... sD − 1(t)] is the vector of incident signals, n(t) = [n0(t) n1(t) ... nM − 1(t)]T is the noise vector at the M-element array, and a(φj) is the array steering vector corresponding to the Direction-Of-Arrival of the jth signal. For simplicity, we will drop the time argument from u, s, and n from this point onward.

In geometric terms, the received vector u and the steering vectors a(φj) can be visualized as vectors in M dimensional space. From (9.9), it is seen that the received vector u is a particular linear combination of the array steering vectors, with s0, s1,...,sD-1 being the coefficients of the combination. In terms of the above data model, the input covariance matrix Ruu can be expressed as

Equation 9.10. $\mathbf{R}_{uu} = E\left[\mathbf{u}\mathbf{u}^H\right] = \mathbf{A}\,E\left[\mathbf{s}\mathbf{s}^H\right]\mathbf{A}^H + E\left[\mathbf{n}\mathbf{n}^H\right]$

Equation 9.11. $\mathbf{R}_{uu} = \mathbf{A}\,\mathbf{R}_{ss}\,\mathbf{A}^H + \sigma_n^2\,\mathbf{I}$

where Rss is the signal correlation matrix E [ssH].

The eigenvalues of Ruu are the values, {λ0, ..., λM − 1} such that

Equation 9.12. $\det\left[\mathbf{R}_{uu} - \lambda_i\,\mathbf{I}\right] = 0$

Using (9.11), we can rewrite this as

Equation 9.13. $\det\left[\mathbf{A}\mathbf{R}_{ss}\mathbf{A}^H - \left(\lambda_i - \sigma_n^2\right)\mathbf{I}\right] = 0$

Therefore the eigenvalues, νi, of ARssAH are

Equation 9.14. $\nu_i = \lambda_i - \sigma_n^2$

Since A is comprised of steering vectors which are linearly independent, it has full column rank, and the signal correlation matrix Rss is non-singular as long as the incident signals are not highly correlated.

A full column rank A and a nonsingular Rss guarantee that, when the number of incident signals D is less than the number of array elements M, the M × M matrix ARssAH is positive semidefinite with rank D.

From elementary linear algebra, this implies that M − D of the eigenvalues, νi, of ARssAH are zero. From (9.14), this means that M − D of the eigenvalues of Ruu are equal to the noise variance, $\sigma_n^2$. We then sort the eigenvalues of Ruu such that λ0 is the largest eigenvalue, and λM − 1 is the smallest eigenvalue. Therefore,

Equation 9.15. $\lambda_D = \lambda_{D+1} = \cdots = \lambda_{M-1} = \sigma_n^2$

In practice, however, when the autocorrelation matrix Ruu is estimated from a finite data sample, all the eigenvalues corresponding to the noise power will not be identical. Instead they will appear as a closely spaced cluster, with the variance of their spread decreasing as the number of samples used to obtain an estimate of Ruu is increased. Once the multiplicity, K, of the smallest eigenvalue is determined, an estimate of the number of signals, $\hat{D}$, can be obtained from the relation M = D + K. Therefore, the estimated number of signals is given by

Equation 9.16. $\hat{D} = M - K$

The eigenvector associated with a particular eigenvalue, λi, is the vector qi such that

Equation 9.17. $\mathbf{R}_{uu}\,\mathbf{q}_i = \lambda_i\,\mathbf{q}_i$

For eigenvectors associated with the M − D smallest eigenvalues, we have

Equation 9.18. $\mathbf{R}_{uu}\,\mathbf{q}_i = \left(\mathbf{A}\mathbf{R}_{ss}\mathbf{A}^H + \sigma_n^2\mathbf{I}\right)\mathbf{q}_i = \sigma_n^2\,\mathbf{q}_i$

Equation 9.19. $\mathbf{A}\,\mathbf{R}_{ss}\,\mathbf{A}^H\,\mathbf{q}_i = \mathbf{0}$

Since A has full rank and Rss is nonsingular, this implies that

Equation 9.20. $\mathbf{A}^H\mathbf{q}_i = \mathbf{0}, \qquad i = D, \ldots, M-1$

or

Equation 9.21. $\mathbf{a}^H(\phi_j)\,\mathbf{q}_i = 0, \qquad j = 0, \ldots, D-1, \quad i = D, \ldots, M-1$

This means that the eigenvectors associated with the M − D smallest eigenvalues are orthogonal to the D steering vectors that make up A.

Equation 9.22. $\left\{\mathbf{q}_D, \ldots, \mathbf{q}_{M-1}\right\} \perp \left\{\mathbf{a}(\phi_0), \ldots, \mathbf{a}(\phi_{D-1})\right\}$

This is the essential observation of the MUSIC approach. It means that one can estimate the steering vectors associated with the received signals by finding the steering vectors which are most nearly orthogonal to the eigenvectors associated with the eigenvalues of Ruu that are approximately equal to $\sigma_n^2$.

This analysis shows that the eigenvectors of the covariance matrix Ruu belong to one of two orthogonal subspaces, called the principal eigen subspace (signal subspace) and the non-principal eigen subspace (noise subspace). The steering vectors corresponding to the Directions-Of-Arrival lie in the signal subspace and are hence orthogonal to the noise subspace. By searching through all possible array steering vectors to find those which are perpendicular to the space spanned by the non-principal eigenvectors, the DOAs φi can be determined.

To search the noise subspace, we form a matrix containing the noise eigenvectors:

Equation 9.23. $\mathbf{V}_n = \left[\mathbf{q}_D\;\mathbf{q}_{D+1}\;\cdots\;\mathbf{q}_{M-1}\right]$

Since the steering vectors corresponding to signal components are orthogonal to the noise subspace eigenvectors, $\mathbf{a}^H(\phi)\,\mathbf{V}_n\mathbf{V}_n^H\,\mathbf{a}(\phi) = 0$ for φ corresponding to the DOA of a multipath component. The DOAs of the multiple incident signals can then be estimated by locating the peaks of a MUSIC spatial spectrum given by

Equation 9.24. $P_{MUSIC}(\phi) = \dfrac{1}{\mathbf{a}^H(\phi)\,\mathbf{V}_n\mathbf{V}_n^H\,\mathbf{a}(\phi)}$

or, alternatively,

Equation 9.25. $P_{MUSIC}(\phi) = \dfrac{\mathbf{a}^H(\phi)\,\mathbf{a}(\phi)}{\mathbf{a}^H(\phi)\,\mathbf{V}_n\mathbf{V}_n^H\,\mathbf{a}(\phi)}$

Orthogonality between a(φ) and Vn will minimize the denominator and hence will give rise to peaks in the MUSIC spectrum defined in (9.24) and (9.25). The largest peaks in the MUSIC spectrum correspond to the directions of arrival of the signals impinging on the array.

Once the Directions-Of-Arrival, φi, are determined from the MUSIC spectrum, the signal covariance matrix Rss can be determined from the following relation [Sch86a].

Equation 9.26. $\mathbf{R}_{ss} = \left(\mathbf{A}^H\mathbf{A}\right)^{-1}\mathbf{A}^H\left(\mathbf{R}_{uu} - \sigma_n^2\mathbf{I}\right)\mathbf{A}\left(\mathbf{A}^H\mathbf{A}\right)^{-1}$

From (9.26), the powers and cross correlations between the various input signals can be readily obtained.

The MUSIC algorithm may be summarized as follows:

  1. Collect input samples u(k), k = 0, ..., N − 1, and estimate the input covariance matrix

    Equation 9.27. $\hat{\mathbf{R}}_{uu} = \dfrac{1}{N}\sum_{k=0}^{N-1}\mathbf{u}(k)\,\mathbf{u}^H(k)$

  2. Perform eigen decomposition on the estimated covariance matrix $\hat{\mathbf{R}}_{uu}$

    Equation 9.28. $\hat{\mathbf{R}}_{uu} = \mathbf{V}\,\boldsymbol{\Lambda}\,\mathbf{V}^H$

    where Λ = diag{λ0, λ1, . . ., λM − 1}, λ0 ≥ λ1 ≥ . . . ≥ λM − 1 are the eigenvalues and V = [q0 q1 . . . qM − 1] are the corresponding eigenvectors of $\hat{\mathbf{R}}_{uu}$.

  3. Estimate the number of signals, $\hat{D}$, from the multiplicity K of the smallest eigenvalue λmin as

    Equation 9.29. $\hat{D} = M - K$

  4. Compute the MUSIC spectrum

    Equation 9.30. $P_{MUSIC}(\phi) = \dfrac{\mathbf{a}^H(\phi)\,\mathbf{a}(\phi)}{\mathbf{a}^H(\phi)\,\hat{\mathbf{V}}_n\hat{\mathbf{V}}_n^H\,\mathbf{a}(\phi)}$

    where $\hat{\mathbf{V}}_n = \left[\mathbf{q}_{\hat{D}}\;\cdots\;\mathbf{q}_{M-1}\right]$ contains the estimated noise subspace eigenvectors.

  5. Find the $\hat{D}$ largest peaks of $P_{MUSIC}(\phi)$ to obtain estimates of the Directions-Of-Arrival.
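The following sketch implements steps 1 through 5 under the same assumptions as the earlier snippets (steering_vector and the snapshot matrix U are reused). The factor-of-two threshold used to determine the multiplicity of the smallest eigenvalue is an illustrative stand-in for the source order detection criteria of Section 9.7.

    import numpy as np

    def music_spectrum(U, M, phis=np.arange(0.0, 180.5, 0.5)):
        N = U.shape[1]
        R_hat = U @ U.conj().T / N             # step 1: sample covariance (9.27)
        lam, V = np.linalg.eigh(R_hat)         # step 2: eigen decomposition (9.28)
        lam, V = lam[::-1], V[:, ::-1]         # sort eigenvalues in descending order
        K = int(np.sum(lam < 2.0 * lam[-1]))   # multiplicity of the smallest eigenvalue
        D_hat = M - K                          # step 3: number of signals (9.29)
        Vn = V[:, D_hat:]                      # noise subspace eigenvectors (9.23)
        P = []
        for p in phis:                         # step 4: MUSIC spectrum (9.30)
            a = steering_vector(p, M)
            P.append((a.conj() @ a).real / (a.conj() @ Vn @ Vn.conj().T @ a).real)
        return phis, np.array(P), D_hat        # step 5: pick the D_hat largest peaks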

Figure 9-3 shows a comparison between the resolution performance of MUSIC and Capon's minimum variance method. As seen clearly from the plot, MUSIC can resolve closely spaced signals which cannot be detected by Capon's method. Simulation results show that two signals arriving at angles 90 and 95 degrees, respectively, at the input of a 6-element uniformly spaced linear array can be detected by MUSIC, whereas Capon's minimum variance method fails to differentiate between the two signals [Muh96b].

Figure 9-3. Comparison of MUSIC and Capon’s minimum variance method. Two signals of equal power at an SNR of 20 dB arrive at a 6-element uniformly spaced array with an interelement spacing equal to half a wavelength at angles 90 and 95 degrees, respectively [Muh96a].

It should be noted that, unlike the conventional methods, the MUSIC spatial spectrum does not estimate the signal power associated with each arrival angle. Instead, when the ensemble average of the array input covariance matrix is known exactly, under uncorrelated and identical noise conditions, the peaks of PMUSIC(φ) are guaranteed to correspond to the true Directions-Of-Arrival. Since these peaks are distinct irrespective of the actual separation between arrival angles, in principle, with perfect array calibration, these estimators can distinguish and resolve arbitrarily closely spaced signals. When impinging signals sl(t) from (9.8) are highly correlated, MUSIC fails because Rss becomes singular. In Section 9.4.2, techniques are presented to handle highly correlated signals.

Improvements to the MUSIC Algorithm

Various modifications to the MUSIC algorithm have been proposed to increase its resolution performance and decrease its computational complexity. One such improvement is the Root-MUSIC algorithm developed by Barabell [Bar83], which is based on polynomial rooting and provides higher resolution, but is applicable only to a uniformly spaced linear array. Another improvement proposed by Barabell uses the properties of signal space eigenvectors (principal eigenvectors) to define a rational spectrum function with improved resolution capability [Bar83].

Cyclic MUSIC, which exploits the spectral coherence properties of the signal to improve the performance of the conventional MUSIC algorithms, has been proposed in [Sch89]. Fast Subspace Decomposition techniques have also been studied to decrease the computational complexity of MUSIC [Xu94b].

Root-MUSIC Algorithm

For the case of a uniformly spaced linear array with interelement spacing d, the mth element of the steering vector a(φ) may be expressed as (see Chapter 3):

Equation 9.31. $a_m(\phi) = \exp\!\left(j\,\frac{2\pi}{\lambda}\,m\,d\cos\phi\right), \qquad m = 0, 1, \ldots, M-1$

The MUSIC spectrum given by (9.24) is an all-pole function of the form

Equation 9.32. $P_{MUSIC}(\phi) = \dfrac{1}{\mathbf{a}^H(\phi)\,\mathbf{C}\,\mathbf{a}(\phi)}$

where $\mathbf{C} = \mathbf{V}_n\mathbf{V}_n^H$. Using equation (9.31), the denominator of (9.32) may be written as

Equation 9.33. $\mathbf{a}^H(\phi)\,\mathbf{C}\,\mathbf{a}(\phi) = \sum_{m=0}^{M-1}\sum_{n=0}^{M-1} e^{-j\frac{2\pi}{\lambda}md\cos\phi}\, C_{mn}\, e^{j\frac{2\pi}{\lambda}nd\cos\phi}$

where Cmn is the entry in the mth row and nth column of C. Combining the two summations into one, (9.33) can be simplified as

Equation 9.34. $\mathbf{a}^H(\phi)\,\mathbf{C}\,\mathbf{a}(\phi) = \sum_{l=-(M-1)}^{M-1} C_l\, e^{j\frac{2\pi}{\lambda}ld\cos\phi}$

where $C_l = \sum_{n-m=l} C_{mn}$ is the sum of the entries of C along the lth diagonal.

By defining a polynomial D(z) as follows,

Equation 9.35. $D(z) = \sum_{l=-(M-1)}^{M-1} C_l\, z^l$

evaluating the MUSIC spectrum PMUSIC(φ) becomes equivalent to evaluating the polynomial D(z) on the unit circle, and the peaks in the MUSIC spectrum exist because the roots of D(z) lie close to the unit circle. Ideally, with no noise, the poles will lie exactly on the unit circle at locations determined by the Direction-Of-Arrival. In other words, a pole of D(z) at z = z1 = |z1|exp(jarg(z1)) will result in a peak in the MUSIC spectrum at

Equation 9.36. $\phi_1 = \cos^{-1}\!\left(\dfrac{\lambda}{2\pi d}\,\arg(z_1)\right)$

Barabell [Bar83] showed through simulations that the Root-MUSIC algorithm has better resolution than the spectral MUSIC algorithm, especially at low SNR conditions.
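A Root-MUSIC sketch under the same half-wavelength ULA assumptions: form the polynomial coefficients of (9.35) from C = VnVnH, root the polynomial, and map the D̂ roots nearest (and inside) the unit circle to angles via (9.36).

    import numpy as np

    def root_music(Vn, D_hat, d=0.5):
        M = Vn.shape[0]
        C = Vn @ Vn.conj().T
        # C_l: sums along the l-th diagonal of C, ordered l = M-1 ... -(M-1)  (9.34)
        coeffs = np.array([np.trace(C, offset=l) for l in range(M - 1, -M, -1)])
        roots = np.roots(coeffs)
        roots = roots[np.abs(roots) < 1.0]                   # keep roots inside the circle
        roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:D_hat]
        # z = exp(j*2*pi*d*cos(phi))  =>  phi = arccos(arg(z) / (2*pi*d))   (9.36)
        return np.degrees(np.arccos(np.angle(roots) / (2 * np.pi * d)))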

Cyclic MUSIC Algorithm

Cyclic MUSIC is a signal selective direction finding algorithm which exploits the spectral coherence of the received signal as well as the spatial coherence. By exploiting spectral correlation along with MUSIC, it is possible to resolve signals spaced more closely than the resolution threshold of the array when only one of them is a Signal-of-Interest (SOI) [Sch89][Sch94]. Cyclic MUSIC also circumvents the requirement that the total number of signals impinging on the array (including both SOI and interference) must be less than the number of sensor elements [Gar88].

Consider an array of M sensors which receives Dα signals which exhibit spectral correlation at a cycle frequency α, and an arbitrary number of interferers that do not exhibit spectral correlation at that particular frequency. For example, this could be the case where a desired user, with a particular spectral correlation and number of multipath components, is to be detected in a heavy co-channel interference environment. Let si(t), i = 0,. . ., Dα − 1 be the desired signals, and n(t) the noise and interference vector incident on the array. The received signal vector u(t) can then be expressed as

Equation 9.37. $\mathbf{u}(t) = \sum_{i=0}^{D_\alpha-1}\mathbf{a}(\phi_i)\,s_i(t) + \mathbf{n}(t) = \mathbf{A}\,\mathbf{s}(t) + \mathbf{n}(t)$

Since only the desired signals exhibit spectral correlation at α, the cyclic autocorrelation matrix of the received signal u(t) defined as

Equation 9.38. $\mathbf{R}_{uu}^{\alpha}(\tau) = \left\langle \mathbf{u}\!\left(t + \tfrac{\tau}{2}\right)\mathbf{u}^H\!\left(t - \tfrac{\tau}{2}\right)e^{-j2\pi\alpha t} \right\rangle$, where $\langle\cdot\rangle$ denotes time averaging,

can be expressed as

Equation 9.39. $\mathbf{R}_{uu}^{\alpha}(\tau) = \mathbf{A}\,\mathbf{R}_{ss}^{\alpha}(\tau)\,\mathbf{A}^H$

where $\mathbf{R}_{ss}^{\alpha}(\tau)$ is the cyclic autocorrelation matrix of the desired signals, defined as

Equation 9.40. $\mathbf{R}_{ss}^{\alpha}(\tau) = \left\langle \mathbf{s}\!\left(t + \tfrac{\tau}{2}\right)\mathbf{s}^H\!\left(t - \tfrac{\tau}{2}\right)e^{-j2\pi\alpha t} \right\rangle$

where

Equation 9.41. $\mathbf{s}(t) = \left[s_0(t)\; s_1(t)\;\cdots\; s_{D_\alpha-1}(t)\right]^T$

Clearly, the matrix $\mathbf{R}_{uu}^{\alpha}(\tau)$ has rank Dα. For Dα < M, the null space of $\mathbf{R}_{uu}^{\alpha}(\tau)$ is spanned by the eigenvectors Vn, α corresponding to its zero eigenvalues,

Equation 9.42. $\mathbf{R}_{uu}^{\alpha}(\tau)\,\mathbf{V}_{n,\alpha} = \mathbf{0}$

If the signals are not fully correlated, $\mathbf{R}_{ss}^{\alpha}(\tau)$ has full rank equal to Dα. Since A is also full rank, it follows from (9.39) and (9.42) that the null space of $\mathbf{R}_{uu}^{\alpha}(\tau)$ is orthogonal to the direction vectors of the desired signals. That is,

Equation 9.43. $\mathbf{V}_{n,\alpha}^H\,\mathbf{a}(\phi_i) = \mathbf{0}, \qquad i = 0, \ldots, D_\alpha - 1$

Using (9.43) as a measure of orthogonality, a cyclic MUSIC spectrum similar to (9.25) can be defined as follows:

Equation 9.44. $P_{CYCLIC\text{-}MUSIC}(\phi) = \dfrac{\mathbf{a}^H(\phi)\,\mathbf{a}(\phi)}{\mathbf{a}^H(\phi)\,\mathbf{V}_{n,\alpha}\mathbf{V}_{n,\alpha}^H\,\mathbf{a}(\phi)}$

The Direction-Of-Arrival of the desired signals can be computed by searching through all φ for the Dα highest peaks of PCYCLIC-MUSIC(φ).
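A sketch of the key computation is given below: the cyclic autocorrelation matrix of (9.38) is estimated from N snapshots with a discrete-lag, asymmetric time average (a simplifying assumption, with the cycle frequency alpha normalized to the snapshot rate), and the Cyclic MUSIC spectrum is then formed from the left null space obtained via the SVD, since the estimated matrix is not Hermitian.

    import numpy as np

    def cyclic_autocorrelation(U, alpha, tau=0):
        # sample estimate of R_uu^alpha(tau); alpha normalized to the snapshot rate
        N = U.shape[1]
        t = np.arange(N - tau)
        return (U[:, t + tau] * np.exp(-2j * np.pi * alpha * t)) @ U[:, t].conj().T / (N - tau)

    def cyclic_music_spectrum(U, alpha, D_alpha, M, phis=np.arange(0.0, 180.5, 0.5)):
        R_alpha = cyclic_autocorrelation(U, alpha)
        W, _, _ = np.linalg.svd(R_alpha)       # non-Hermitian matrix: use the SVD
        Vn = W[:, D_alpha:]                    # left null space, orthogonal to span(A)
        return np.array([(a.conj() @ a).real / (a.conj() @ Vn @ Vn.conj().T @ a).real
                         for a in (steering_vector(p, M) for p in phis)])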

The ESPRIT Algorithm

The Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) algorithm is another subspace-based DOA estimation technique developed by Roy et al. [Pau86b][Roy89][Roy90]. ESPRIT dramatically reduces the computational and storage requirements of MUSIC and does not involve an exhaustive search through all possible steering vectors to estimate the Direction-Of-Arrival. Unlike MUSIC, ESPRIT does not require that the array manifold vectors be precisely known; hence, the array calibration requirements are not stringent. ESPRIT derives its advantages by requiring that the sensor array have a structure that can be decomposed into two equal-sized identical subarrays with the corresponding elements of the two subarrays displaced from each other by a fixed translational (not rotational) distance. That is, the array should possess a displacement (translational) invariance, and the sensors should occur in matched pairs with identical displacement. Fortunately, there are many practical situations where these conditions are satisfied, such as in the case of a uniform linear array.

Consider a planar array of arbitrary geometry composed of m = M/2 sensor pairs or doublets, as shown in Figure 9-4. (It should be noted that, although the array is assumed to be composed of M/2 sensor doublets in this example, it is possible to have M-element arrays such as the uniformly spaced linear array to be composed of M-1 overlapping doublets). To describe mathematically the effect of the translational invariance of the sensor array, it is convenient to describe the array as being composed of two identical subarrays, X0 and X1, physically displaced (not rotated) from each other by a known displacement (translational) Δx. The signals received at the ith doublet can then be expressed as

Equation 9.45. $u_{0i}(t) = \sum_{k=0}^{D-1} a_i(\phi_k)\,s_k(t) + n_{0i}(t)$

Figure 9-4. Illustration of ESPRIT array geometry [Roy90].

Equation 9.46. $u_{1i}(t) = \sum_{k=0}^{D-1} a_i(\phi_k)\, e^{j\frac{2\pi}{\lambda}\Delta_x\cos\phi_k}\, s_k(t) + n_{1i}(t)$

where φk is the Direction-Of-Arrival of the kth source relative to the direction of the translational displacement Δx, and D is the number of signals incident on the array. Now, using matrix and vector notation, the received signal vectors at the two subarrays can be written as follows:

Equation 9.47. $\mathbf{u}_0(t) = \mathbf{A}\,\mathbf{s}(t) + \mathbf{n}_0(t)$

Equation 9.48. $\mathbf{u}_1(t) = \mathbf{A}\,\boldsymbol{\Phi}\,\mathbf{s}(t) + \mathbf{n}_1(t)$

where Φ is a D × D diagonal unitary matrix whose diagonal elements represent the phase delays between the doublet sensors for the D signals. The matrix Φ relates the measurements from subarray u0 to those from subarray u1, and is given by

Equation 9.49. $\boldsymbol{\Phi} = \mathrm{diag}\!\left\{ e^{j\frac{2\pi}{\lambda}\Delta_x\cos\phi_0},\; e^{j\frac{2\pi}{\lambda}\Delta_x\cos\phi_1},\; \ldots,\; e^{j\frac{2\pi}{\lambda}\Delta_x\cos\phi_{D-1}} \right\}$

Though in the complex field, the matrix Φ is a simple scaling operator, it is similar to the real two-dimensional rotation operator. The total array output vector u(t) can be written as

Equation 9.50. $\mathbf{u}(t) = \begin{bmatrix}\mathbf{u}_0(t)\\ \mathbf{u}_1(t)\end{bmatrix} = \bar{\mathbf{A}}\,\mathbf{s}(t) + \begin{bmatrix}\mathbf{n}_0(t)\\ \mathbf{n}_1(t)\end{bmatrix}$

where

Equation 9.51. $\bar{\mathbf{A}} = \begin{bmatrix}\mathbf{A}\\ \mathbf{A}\boldsymbol{\Phi}\end{bmatrix}$

The basic idea behind ESPRIT is to exploit the rotational invariance of the underlying signal subspace induced by the translational invariance of the sensor array [Roy89]. The relevant signal subspace is the one that contains the outputs from the two subarrays u0 and u1. Simultaneous sampling of the output of the arrays leads to two sets of vectors, V0 and V1, that span the same signal subspace.

The signal subspace can be obtained from the knowledge of the input covariance matrix $\mathbf{R}_{uu}$. As before, the M − D smallest eigenvalues of Ruu are equal to $\sigma_n^2$. The D eigenvectors Vs corresponding to the D largest eigenvalues satisfy the relation,

Equation 9.52. $\mathbf{R}_{uu}\,\mathbf{V}_s = \mathbf{V}_s\,\boldsymbol{\Lambda}_s, \qquad \boldsymbol{\Lambda}_s = \mathrm{diag}\left\{\lambda_0, \ldots, \lambda_{D-1}\right\}$

Now, since Range{Vs} = Range{Ā}, there must exist a unique nonsingular T such that Vs = ĀT. Further, the invariance structure of the array allows the decomposition of Vs into $\mathbf{V}_0 \in \mathbb{C}^{m \times D}$ and $\mathbf{V}_1 \in \mathbb{C}^{m \times D}$ such that V0 = AT and V1 = AΦT. This implies that

Equation 9.53. $\mathbf{V}_1 = \mathbf{V}_0\,\mathbf{T}^{-1}\boldsymbol{\Phi}\,\mathbf{T}$

Since V0 and V1 share a common column space, the rank of V01 = [V0 | V1] is D. This implies that there exists a unique rank-D matrix $\mathbf{F} \in \mathbb{C}^{2D \times D}$ such that

Equation 9.54. $\mathbf{V}_{01}\,\mathbf{F} = \left[\mathbf{V}_0 \,|\, \mathbf{V}_1\right]\mathbf{F} = \mathbf{V}_0\mathbf{F}_0 + \mathbf{V}_1\mathbf{F}_1 = \mathbf{0}$

F spans the null space of V01. By defining $\boldsymbol{\Psi} = -\mathbf{F}_0\,\mathbf{F}_1^{-1}$, (9.54) can be rearranged to obtain

Equation 9.55. $\mathbf{V}_0\,\boldsymbol{\Psi} = \mathbf{V}_1$

which implies

Equation 9.56. $\mathbf{A}\,\mathbf{T}\,\boldsymbol{\Psi} = \mathbf{A}\,\boldsymbol{\Phi}\,\mathbf{T}$

Now, assuming A to be full rank, which is true as long as the Directions-Of-Arrival of the signals are distinct, (9.56) implies that

Equation 9.57. $\boldsymbol{\Psi} = \mathbf{T}^{-1}\,\boldsymbol{\Phi}\,\mathbf{T}$

From (9.57), it is evident that the eigenvalues of ψ must be equal to the diagonal elements of Φ, and the columns of T are the eigenvectors of ψ. This is the key relationship in the development of ESPRIT. The signal parameters are obtained as nonlinear functions of the eigenvalues of the operator ψ that maps (rotates) one set of vectors V0 that span the D-dimensional signal subspace into another set of vectors V1.

In practice, with only a finite number of noisy measurements available, the conditions in equations (9.52) and (9.53) are not satisfied. Hence finding a ψ such that $\hat{\mathbf{V}}_0\boldsymbol{\Psi} = \hat{\mathbf{V}}_1$ is not possible. Therefore, one must resort to a Least Squares solution, which minimizes the residual error. Assuming that the set of equations is overdetermined, the Least Squares solution is given by

Equation 9.58. $\boldsymbol{\Psi} = \left(\hat{\mathbf{V}}_0^H\hat{\mathbf{V}}_0\right)^{-1}\hat{\mathbf{V}}_0^H\,\hat{\mathbf{V}}_1$

Once ψ is obtained, its eigenvalues, which correspond to the diagonal elements of Φ, can be easily computed. Since the diagonal elements of Φ are related to the Angles-Of-Arrival via equation (9.49), the angles can then be directly computed.

Since both $\hat{\mathbf{V}}_0$ and $\hat{\mathbf{V}}_1$ are equally noisy, the problem is better solved using the Total Least Squares (TLS) criterion. This amounts to replacing the zero matrix in (9.54) by a matrix of errors whose Frobenius norm (i.e., total Least Squared error) is to be minimized. The TLS ESPRIT algorithm does this and may be summarized as follows:

  1. Obtain an estimate of Ruu from the measurements u.

  2. Perform eigen decomposition on $\hat{\mathbf{R}}_{uu}$, i.e.,

    Equation 9.59. $\hat{\mathbf{R}}_{uu} = \mathbf{V}\,\boldsymbol{\Lambda}\,\mathbf{V}^H$

    where Λ = diag{λ0,. . ., λM − 1} and V = [q0, . . ., qM − 1] are the eigenvalues and eigenvectors, respectively.

  3. Using the multiplicity, K, of the smallest eigenvalue λmin, estimate the number of signals as $\hat{D} = M - K$.

  4. Obtain the signal subspace estimate $\hat{\mathbf{V}}_s = \left[\mathbf{q}_0\;\cdots\;\mathbf{q}_{\hat{D}-1}\right]$ and decompose it into the subarray matrices,

    Equation 9.60. $\hat{\mathbf{V}}_s = \begin{bmatrix}\hat{\mathbf{V}}_0\\ \hat{\mathbf{V}}_1\end{bmatrix}$

  5. Compute the eigen decomposition

    Equation 9.61. $\hat{\mathbf{V}}_{01}^H\hat{\mathbf{V}}_{01} = \left[\hat{\mathbf{V}}_0\,|\,\hat{\mathbf{V}}_1\right]^H\left[\hat{\mathbf{V}}_0\,|\,\hat{\mathbf{V}}_1\right] = \mathbf{V}\,\boldsymbol{\Lambda}\,\mathbf{V}^H$

    and partition V into submatrices,

    Equation 9.62. $\mathbf{V} = \begin{bmatrix}\mathbf{V}_{11} & \mathbf{V}_{12}\\ \mathbf{V}_{21} & \mathbf{V}_{22}\end{bmatrix}$, where each block is $\hat{D} \times \hat{D}$

  6. Calculate the eigenvalues of $\boldsymbol{\Psi} = -\mathbf{V}_{12}\,\mathbf{V}_{22}^{-1}$,

    Equation 9.63. $\hat{\lambda}_k = \mathrm{eig}_k\!\left\{-\mathbf{V}_{12}\,\mathbf{V}_{22}^{-1}\right\}, \qquad k = 0, \ldots, \hat{D}-1$

  7. Estimate the Angle-Of-Arrival as

    Equation 9.64. $\hat{\phi}_k = \cos^{-1}\!\left(\dfrac{\lambda\,\arg\left(\hat{\lambda}_k\right)}{2\pi\,\Delta_x}\right)$

As seen from the above discussion, ESPRIT eliminates the search procedure inherent in most DOA estimation methods. ESPRIT produces the DOA estimates directly in terms of the eigenvalues.
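A TLS ESPRIT sketch for the half-wavelength ULA used earlier, taking the first and last M − 1 elements as the two maximally overlapping subarrays; treating the number of signals D̂ as already known is an assumption here, and the angle convention matches (9.31).

    import numpy as np

    def tls_esprit(U, D_hat, d=0.5):
        M, N = U.shape
        R_hat = U @ U.conj().T / N
        lam, V = np.linalg.eigh(R_hat)
        Vs = V[:, ::-1][:, :D_hat]                  # signal subspace (largest eigenvalues)
        V0, V1 = Vs[:-1, :], Vs[1:, :]              # overlapping subarray signal subspaces
        V01 = np.hstack([V0, V1])
        _, E = np.linalg.eigh(V01.conj().T @ V01)   # eigen decomposition (9.61)
        E = E[:, ::-1]                              # descending eigenvalue order
        E12, E22 = E[:D_hat, D_hat:], E[D_hat:, D_hat:]   # partition as in (9.62)
        Psi = -E12 @ np.linalg.inv(E22)             # (9.63)
        phases = np.angle(np.linalg.eigvals(Psi))   # arguments of the eigenvalues of Psi
        return np.degrees(np.arccos(phases / (2 * np.pi * d)))   # (9.64)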

Maximum Likelihood Techniques

Maximum Likelihood (ML) techniques were some of the first techniques to be investigated for DOA estimation. Since ML techniques were computationally intensive, they were less popular than suboptimal subspace techniques. However, in terms of performance, the ML techniques are superior to the subspace based techniques, especially in low signal-to-noise ratio conditions or when the number of signal samples is small [Zis88]. Moreover, unlike subspace based techniques, ML based techniques can also perform well in correlated signal conditions.

To derive the ML estimator, data collected over a block of N snapshots is formulated as

Equation 9.65. $\mathbf{U} = \mathbf{A}(\boldsymbol{\Phi})\,\mathbf{S} + \mathbf{N}$

where U = [u(0), ..., u(N − 1)] is the array data input matrix of dimension M × N, A(Φ) = [a(φ0), ..., a(φD − 1)] is the spatial signature matrix of dimension M × D, S = [s(0), ..., s(N − 1)] is the signal waveform matrix of dimension D × N, and N = [n(0), ..., n(N − 1)] is the noise matrix of dimension M × N. In order to derive the ML estimate of the Angles-Of-Arrival φ0, ..., φD − 1 of the D sources, some assumptions are made about the signals and noise. First, it is assumed that the number of signals is known or estimated and is smaller than the number of sensors. Second, the set of D steering vectors is assumed to be linearly independent. The noise component is assumed to be an ergodic complex-valued Gaussian process of zero mean and covariance σ2I, where σ2 is an unknown scalar and I is the identity matrix. Finally, it is assumed that the noise samples are statistically independent. It should be noted that the ML estimator is meaningful even when the assumptions made about noise do not hold, in which case it coincides with the Least Squares estimator [Zis88].

The derivation of the ML estimator described here regards the signals to be sample functions of unknown deterministic sequences rather than random processes. Based on the assumptions made about the nature of noise, the joint probability density function of the sampled data as given by equation (9.65) can be expressed as [Zis88]

Equation 9.66. $f(\mathbf{U}) = \prod_{k=0}^{N-1} \dfrac{1}{\pi^M \det\left[\sigma^2\mathbf{I}\right]}\, \exp\!\left(-\dfrac{1}{\sigma^2}\left\|\mathbf{u}(k) - \mathbf{A}(\boldsymbol{\Phi})\,\mathbf{s}(k)\right\|^2\right)$

where det[ ] denotes the determinant. Ignoring the constant terms, the log likelihood function is given by

Equation 9.67. $L\left(\boldsymbol{\Phi}, \mathbf{S}, \sigma^2\right) = -NM\log\sigma^2 - \dfrac{1}{\sigma^2}\sum_{k=0}^{N-1}\left\|\mathbf{u}(k) - \mathbf{A}(\boldsymbol{\Phi})\,\mathbf{s}(k)\right\|^2$

To compute the maximum likelihood estimator, the log likelihood function of (9.67) must be maximized with respect to the unknown parameters.

Because the logarithm is a monotonic function, maximizing (9.67) is equivalent to the following minimization problem:

Equation 9.68. $\min_{\boldsymbol{\Phi},\,\mathbf{S}}\; \sum_{k=0}^{N-1}\left\|\mathbf{u}(k) - \mathbf{A}(\boldsymbol{\Phi})\,\mathbf{s}(k)\right\|^2$

Fixing Φ and minimizing with respect to S yields the well-known Least Squares solution

Equation 9.69. $\hat{\mathbf{s}}(k) = \left(\mathbf{A}^H(\boldsymbol{\Phi})\,\mathbf{A}(\boldsymbol{\Phi})\right)^{-1}\mathbf{A}^H(\boldsymbol{\Phi})\,\mathbf{u}(k)$

Substituting (9.69) into (9.68), we obtain

Equation 9.70. $\min_{\boldsymbol{\Phi}}\; \sum_{k=0}^{N-1}\left\|\left(\mathbf{I} - \mathbf{P}_{\mathbf{A}(\boldsymbol{\Phi})}\right)\mathbf{u}(k)\right\|^2$

where PA(Φ) is the projection operator which projects vectors onto the space spanned by the columns of A(Φ), and is given by

Equation 9.71. $\mathbf{P}_{\mathbf{A}(\boldsymbol{\Phi})} = \mathbf{A}(\boldsymbol{\Phi})\left(\mathbf{A}^H(\boldsymbol{\Phi})\,\mathbf{A}(\boldsymbol{\Phi})\right)^{-1}\mathbf{A}^H(\boldsymbol{\Phi})$

Therefore, the ML estimate of the Directions-Of-Arrival Φ = {φ0, . . ., φD − 1} is obtained by maximizing the log-likelihood function

Equation 9.72. $J(\boldsymbol{\Phi}) = \sum_{k=0}^{N-1}\left\|\mathbf{P}_{\mathbf{A}(\boldsymbol{\Phi})}\,\mathbf{u}(k)\right\|^2$

Equation (9.72) can be interpreted in a geometric way such that the ML technique appears as a variant of the subspace based methods. Viberg and Ottersten [Vib91] presented a generalized framework to highlight the similarities between the various subspace based DOA estimation techniques and the maximum likelihood technique. In geometric terms, (9.72) shows that the ML estimator is obtained by searching over the array manifold for the D steering vectors that form a D-dimensional signal subspace which is closest to the vectors {u(k), k = 0, ..., N − 1}, where closeness is measured by the modulus of the projection of the vectors onto this subspace.

It can be shown that equation (9.72) can be equivalently written as

Equation 9.73. $\hat{\boldsymbol{\Phi}} = \arg\max_{\boldsymbol{\Phi}}\; \mathrm{tr}\left[\mathbf{P}_{\mathbf{A}(\boldsymbol{\Phi})}\,\hat{\mathbf{R}}_{uu}\right]$

where $\hat{\mathbf{R}}_{uu}$ is the sample covariance matrix

Equation 9.74. $\hat{\mathbf{R}}_{uu} = \dfrac{1}{N}\sum_{k=0}^{N-1}\mathbf{u}(k)\,\mathbf{u}^H(k)$

The maximization of the log-likelihood function in (9.73) is a nonlinear, multidimensional maximization problem which is computationally very intensive. Many computationally efficient algorithms have been developed to simplify the solution to the maximization problem [Fed88] [Zis88] [Li93].

The Alternating Projection Algorithm developed by Ziskind and Wax [Zis88] is an iterative technique which reduces the maximization problem from a multi-dimensional problem to a one-dimensional problem. The idea is to perform the maximization with respect to a single parameter while holding the remaining parameters fixed. That is, the value of φi at the (n + 1)th iteration is obtained by solving the following one-dimensional maximization problem:

Equation 9.75. $\hat{\phi}_i^{(n+1)} = \arg\max_{\phi_i}\; \mathrm{tr}\left[\mathbf{P}_{\left[\mathbf{A}\left(\hat{\boldsymbol{\Phi}}_i^{(n)}\right),\; \mathbf{a}(\phi_i)\right]}\,\hat{\mathbf{R}}_{uu}\right]$

where $\hat{\boldsymbol{\Phi}}_i^{(n)}$ denotes the vector comprising the Direction-Of-Arrival estimates of all signals other than the one being computed:

Equation 9.76. $\hat{\boldsymbol{\Phi}}_i^{(n)} = \left[\hat{\phi}_0^{(n+1)}, \ldots, \hat{\phi}_{i-1}^{(n+1)}, \hat{\phi}_{i+1}^{(n)}, \ldots, \hat{\phi}_{D-1}^{(n)}\right]$

Since the log-likelihood function J(Φ) may have multiple local maxima, proper initialization is critical for global convergence. The initialization procedure suggested by Ziskind and Wax begins by solving the maximization problem for a single source:

Equation 9.77. $\hat{\phi}_0^{(0)} = \arg\max_{\phi_0}\; \mathrm{tr}\left[\mathbf{P}_{\mathbf{a}(\phi_0)}\,\hat{\mathbf{R}}_{uu}\right]$

Using the estimate $\hat{\phi}_0^{(0)}$, $\hat{\phi}_1^{(0)}$ is calculated as

Equation 9.78. $\hat{\phi}_1^{(0)} = \arg\max_{\phi_1}\; \mathrm{tr}\left[\mathbf{P}_{\left[\mathbf{a}\left(\hat{\phi}_0^{(0)}\right),\; \mathbf{a}(\phi_1)\right]}\,\hat{\mathbf{R}}_{uu}\right]$

Continuing in this fashion, $\hat{\phi}_{D-1}^{(0)}$ is computed. After proper initialization, the alternating projection algorithm can be used to maximize the log-likelihood function.

Further reduction in computational complexity can be achieved by taking advantage of the properties of the projection matrix [Zis88]. It can be shown that, by using the properties of the projection matrix, (9.75) can be written as

Equation 9.79. $\hat{\phi}_i^{(n+1)} = \arg\max_{\phi_i}\; \mathrm{tr}\left[\mathbf{P}_{\mathbf{b}(\phi_i)}\,\hat{\mathbf{R}}_{uu}\right]$

where

Equation 9.80. $\mathbf{b}(\phi_i) = \left(\mathbf{I} - \mathbf{P}_{\mathbf{A}\left(\hat{\boldsymbol{\Phi}}_i^{(n)}\right)}\right)\mathbf{a}(\phi_i)$

By defining a unit vector,

Equation 9.81. $\mathbf{c}(\phi_i) = \dfrac{\mathbf{b}(\phi_i)}{\left\|\mathbf{b}(\phi_i)\right\|}$

equation (9.79) can be rewritten as

Equation 9.82. $\hat{\phi}_i^{(n+1)} = \arg\max_{\phi_i}\; \mathbf{c}^H(\phi_i)\,\hat{\mathbf{R}}_{uu}\,\mathbf{c}(\phi_i)$

The Alternating Projection based maximum likelihood estimator algorithm may be summarized as follows:

  1. Initialization

    Equation 9.83. $\hat{\phi}_i^{(0)} = \arg\max_{\phi_i}\; \mathrm{tr}\left[\mathbf{P}_{\left[\mathbf{a}\left(\hat{\phi}_0^{(0)}\right), \ldots, \mathbf{a}\left(\hat{\phi}_{i-1}^{(0)}\right),\; \mathbf{a}(\phi_i)\right]}\,\hat{\mathbf{R}}_{uu}\right], \qquad i = 0, \ldots, D-1$

  2. Main Loop

    Equation 9.84. $\hat{\phi}_i^{(n+1)} = \arg\max_{\phi_i}\; \mathbf{c}^H(\phi_i)\,\hat{\mathbf{R}}_{uu}\,\mathbf{c}(\phi_i), \qquad i = 0, \ldots, D-1$
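A sketch of the alternating projection search follows, using a coarse angular grid in place of a continuous one-dimensional maximization; the grid spacing and the fixed number of sweeps are illustrative choices, and steering_vector is reused from the earlier snippets.

    import numpy as np

    def proj(A):
        # projection onto the column space of A, equation (9.71)
        return A @ np.linalg.pinv(A)

    def ap_ml(U, D, phis=np.arange(0.0, 180.5, 0.5), sweeps=5):
        M, N = U.shape
        R_hat = U @ U.conj().T / N
        cost = lambda cols: np.trace(proj(np.column_stack(cols)) @ R_hat).real
        est = []
        for i in range(D):        # initialization (9.77)-(9.78): add one source at a time
            best = max(phis, key=lambda p: cost(
                [steering_vector(q, M) for q in est] + [steering_vector(p, M)]))
            est.append(best)
        for _ in range(sweeps):   # main loop: relax one angle at a time (9.75)
            for i in range(D):
                others = [steering_vector(q, M) for j, q in enumerate(est) if j != i]
                est[i] = max(phis, key=lambda p: cost(others + [steering_vector(p, M)]))
        return est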

DOA Estimation under Coherent Signal Conditions

As mentioned in Section 9.2.1, the MUSIC algorithm works on the premise that the signals impinging on the array are not fully correlated or coherent. Only under uncorrelated conditions does the source covariance matrix Rss satisfy the full rank condition which is the basis of the MUSIC eigen decomposition. The performance of MUSIC degrades severely in a coherent or highly correlated signal environment, as encountered in multipath propagation where multiple versions of the same signal arrive within the resolvable chip or symbol duration. Many modifications to the MUSIC algorithm have been proposed to make it work in the presence of coherent signals. Many of these techniques involve modification of the covariance matrix through a preprocessing scheme called spatial smoothing. One method of spatial smoothing, proposed by Evans et al. [Eva82] and further expanded by Shan et al. [Sha85], is based on averaging the covariance matrices of identical overlapping subarrays. This method requires an array of identical elements built with some form of periodic structure, such as the uniformly spaced linear array. An adaptive spatial smoothing technique was proposed by Takao and Kikuma [Tak87], which is useful for interference cancellation in multipath environments. Another form of spatial smoothing, proposed by Haber and Zoltowski [Hab86], involves moving the entire array structure during the time interval in which the covariances are estimated. A similar technique based on moving the array was proposed by Li and Compton [Li94]. Spatial smoothing techniques always impose restrictions on the type and structure of the array. For the general case, coherent signal detection involves employing a multidimensional search through all possible linear combinations of steering vectors to find those orthogonal to the noise subspace [Zol86].

Spatial Smoothing Techniques

The idea behind the spatial smoothing scheme proposed by Evans et al. [Eva82] is to divide a linear uniform array with M identical sensors into overlapping forward subarrays of size p, such that the sensor elements {0, ..., p − 1} form the first forward subarray and sensors {1, ..., p} form the second forward subarray, etc. Let uk(t) denote the vector of the received signals at the kth forward subarray. Based on the notation of equation (9.9), we can model the signals received at each subarray as

Equation 9.85. $\mathbf{u}_k(t) = \mathbf{A}\,\mathbf{F}^{(k)}\,\mathbf{s}(t) + \mathbf{n}_k(t), \qquad k = 0, \ldots, L-1$

where F(k) denotes the kth power of the diagonal matrix

Equation 9.86. $\mathbf{F} = \mathrm{diag}\!\left\{ e^{j\frac{2\pi}{\lambda}d\cos\phi_0},\; e^{j\frac{2\pi}{\lambda}d\cos\phi_1},\; \ldots,\; e^{j\frac{2\pi}{\lambda}d\cos\phi_{D-1}} \right\}$

The covariance matrix of the kth forward subarray is therefore given by

Equation 9.87. $\mathbf{R}_k^f = \mathbf{A}\,\mathbf{F}^{(k)}\,\mathbf{R}_{ss}\left(\mathbf{F}^{(k)}\right)^H\mathbf{A}^H + \sigma_n^2\,\mathbf{I}$

where Rss is the covariance matrix of the sources.

Based on the above, the forward averaged spatially smoothed covariance matrix Rf is defined as the sample mean of the subarray covariance matrices:

Equation 9.88. $\mathbf{R}^f = \dfrac{1}{L}\sum_{k=0}^{L-1}\mathbf{R}_k^f$

where L=M-p+1 is the number of subarrays. Now, substituting (9.87) in (9.88), we obtain

Equation 9.89. $\mathbf{R}^f = \mathbf{A}\,\mathbf{R}_{ss}^f\,\mathbf{A}^H + \sigma_n^2\,\mathbf{I}$

where Rfss is the modified covariance matrix of the signals, given by

Equation 9.90. $\mathbf{R}_{ss}^f = \dfrac{1}{L}\sum_{k=0}^{L-1}\mathbf{F}^{(k)}\,\mathbf{R}_{ss}\left(\mathbf{F}^{(k)}\right)^H$

For L ≥ D, the covariance matrix $\mathbf{R}_{ss}^f$ will be nonsingular regardless of the coherence of the signals [Pil89a].

The price paid for detection of coherent signals using forward averaging spatial smoothing is the reduction in the array aperture. An M element array can detect only M/2 coherent signals using MUSIC with forward averaging spatial smoothing as opposed to M − 1 noncoherent signals that can be detected by conventional MUSIC.

Pillai and Kwon [Pil89a] proved that by making use of a set of forward and conjugate backward subarrays simultaneously, it is possible to detect up to 2M/3 coherent signals. In this scheme, in addition to splitting the array into overlapping forward subarrays, it is also split into overlapping backward subarrays such that the first backward subarray is formed using elements {M, M − 1, ..., M − p + 1}, the second subarray is formed using elements {M − 1, M − 2, ..., M − p}, and so on.

Similar to (9.85), the complex conjugate of the received signal vector at the kth backward subarray can be expressed as

Equation 9.91. $\mathbf{u}_k^b(t) = \mathbf{A}\,\mathbf{F}^{(k)}\,\tilde{\mathbf{s}}(t) + \mathbf{n}_k^b(t), \qquad \tilde{\mathbf{s}}(t) = \mathbf{F}^{-(M-1)}\,\mathbf{s}^{*}(t)$

where F is defined in (9.86). The covariance matrix of the kth backward subarray is therefore given by

Equation 9.92. $\mathbf{R}_k^b = \mathbf{A}\,\mathbf{F}^{(k)}\,\tilde{\mathbf{R}}_{ss}\left(\mathbf{F}^{(k)}\right)^H\mathbf{A}^H + \sigma_n^2\,\mathbf{I}$

where

Equation 9.93. $\tilde{\mathbf{R}}_{ss} = \mathbf{F}^{-(M-1)}\,\mathbf{R}_{ss}^{*}\left(\mathbf{F}^{-(M-1)}\right)^H$

Now the spatially smoothed backward subarray matrix Rb can be defined as

Equation 9.94. $\mathbf{R}^b = \dfrac{1}{L}\sum_{k=0}^{L-1}\mathbf{R}_k^b$

It can be shown that the backward spatially smoothed covariance matrix Rb will be of full rank as long as the corresponding smoothed signal covariance matrix is non-singular, and this non-singularity is guaranteed whenever L ≥ D [Pil89a].

Now the forward/conjugate backward smoothed covariance matrix $\tilde{\mathbf{R}}$ is defined as the mean of Rf and Rb, i.e.,

Equation 9.95. $\tilde{\mathbf{R}} = \dfrac{\mathbf{R}^f + \mathbf{R}^b}{2}$

Using an M element array and applying MUSIC on $\tilde{\mathbf{R}}$, it is possible to detect up to 2M/3 coherent signals [Pil89a].
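A sketch of forward/conjugate backward smoothing of a ULA sample covariance matrix is given below, using the identity that the backward smoothed matrix equals J(Rf)*J for a uniform linear array; the subarray size p is a design parameter.

    import numpy as np

    def fb_smooth(R, p):
        # forward/conjugate backward spatially smoothed covariance, (9.88)-(9.95)
        M = R.shape[0]
        L = M - p + 1
        Rf = sum(R[k:k + p, k:k + p] for k in range(L)) / L   # forward smoothing (9.88)
        J = np.eye(p)[::-1]                                   # exchange (anti-diagonal) matrix
        Rb = J @ Rf.conj() @ J                                # conjugate backward counterpart
        return (Rf + Rb) / 2                                  # (9.95)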

Figure 9-5 shows a comparison between conventional MUSIC and MUSIC with forward/backward spatial smoothing in a coherent multipath signal environment. Simulations with three coherent signals impinging on a 6-element uniform linear array at 60, 90, and 120 degrees show that MUSIC fails almost completely, whereas with a spatial smoothing preprocessing scheme, all of the three multipath signals are detected clearly.

Figure 9-5. Comparison of MUSIC with and without forward/backward averaging in coherent multipath. Three coherent signals of equal power with SNRs of 20 dB arrive at a 6-element uniformly spaced array with an interelement spacing equal to half a wavelength at angles 60, 90, and 120 degrees, respectively [Muh96a].

Multidimensional MUSIC

It was stated in Section 9.2.1 that in the presence of coherent signals, the signal correlation matrix Rss becomes singular, and hence violates the premise on which the MUSIC derivation is based. However, if all of the coherent signals (typically all multipath components associated with a single source within a resolvable chip) are grouped together as a single signal, the signal correlation matrix can retain its full rank. The direction vector matrix A will then no longer consist of steering vectors corresponding to distinct Directions-Of-Arrival; instead, the columns of A will consist of the spatial signatures associated with each source (group of coherent signals). Essentially, Rss retains full rank under the data model of (8.5) in Chapter 8, where the column vectors of A are linear combinations of one or more steering vectors corresponding to one or more Directions-Of-Arrival. Once the full rank status of Rss is maintained, the MUSIC algorithm is valid, and the signal subspace is spanned by the spatial signature vectors

Equation 9.96. $\mathrm{span}\left\{\mathbf{V}_s\right\} = \mathrm{span}\left\{\mathbf{a}_0, \mathbf{a}_1, \ldots, \mathbf{a}_{D-1}\right\}$

which are orthogonal to the noise subspace. Computing the MUSIC spectrum now involves searching through all possible spatial signature vectors to find peaks in the spectrum. Since spatial signatures are linear combinations of steering vectors, this essentially involves a search in an Nmp dimensional space, where Nmp is the number of components associated with a single source (group). The multi-dimensional MUSIC spectrum is given by

Equation 9.97. $P_{MD\text{-}MUSIC}(\boldsymbol{\Phi}) = \max_{\mathbf{c}}\; \dfrac{1}{\left(\mathbf{A}(\boldsymbol{\Phi})\,\mathbf{c}\right)^H \mathbf{V}_n\mathbf{V}_n^H \left(\mathbf{A}(\boldsymbol{\Phi})\,\mathbf{c}\right)}$

where the vector c is defined as

Equation 9.98. $\mathbf{c} = \left[c_0\; c_1\;\cdots\; c_{N_{mp}-1}\right]^T$

where the values {ci} weight one steering vector relative to another.

As clearly seen from equation (9.97), as the number of multipath components increases, the complexity of the multidimensional search increases exponentially. The computational complexity of MD-MUSIC makes real-time implementation extremely difficult for more than two dimensions.

The Iterative Least Squares Projection Based CMA

The Iterative Least Squares Projection Based CMA (ILSP-CMA) is a property restoral based algorithm which can be used to jointly detect the spatial signatures and the waveforms associated with multiple sources incident on a receiver array. The ILSP-CMA is a data-efficient and cost-efficient technique which overcomes many of the problems associated with the Multi-target CMA and Multi-stage CMA algorithms [Par95].

Consider an M-element array with signals from D sources incident on it. Over a block of N snapshots, the array output can be expressed as

Equation 9.99. $\mathbf{U} = \mathbf{A}\,\mathbf{S} + \mathbf{N}$

where U = [u(0), ..., u(N − 1)] is the array input data matrix of dimension M × N, A = [a0, ..., aD − 1] is the spatial signature matrix of dimension M × D, S = [s(0), ..., s(N − 1)] is the signal waveform matrix of dimension D × N, and N = [n(0), ..., n(N − 1)] is the noise matrix of dimension M × N. Given this block formulation, the ILSP-CMA algorithm provides a means of jointly estimating the spatial signature matrix A and the signal waveforms S, given N snapshots of the array input data matrix U.

By modeling the unknown signal waveforms as deterministic quantities to be estimated, and assuming that the number of signals is known or has been estimated, the log-likelihood function of the array output data is given by [Tal94]

Equation 9.100. $J\left(\mathbf{A}, \mathbf{S}\right) = -NM\log\left(\pi\sigma^2\right) - \dfrac{1}{\sigma^2}\sum_{k=0}^{N-1}\left\|\mathbf{u}(k) - \mathbf{A}\,\mathbf{s}(k)\right\|^2$

where $\sigma^2$ is the noise power. The maximum-likelihood (ML) estimator maximizes J with respect to the unknown spatial signatures A and s(k), k = 0, ..., N − 1 to yield the following minimization problem [Tal94]:

Equation 9.101. $\min_{\mathbf{A},\,\mathbf{S}}\; \left\|\mathbf{U} - \mathbf{A}\,\mathbf{S}\right\|_F^2$

where $\left\|\cdot\right\|_F^2$ is the squared Frobenius norm, and the elements of S are constrained to have a constant modulus. This is a nonlinear separable optimization problem which can be solved in two steps [Gol73].

The ILSP-CMA is an efficient algorithm to solve the minimization problem of (9.101) [Par95]. Let

Equation 9.102. $f\left(\mathbf{A}, \mathbf{S}\right) = \left\|\mathbf{U} - \mathbf{A}\,\mathbf{S}\right\|_F^2$

be a function of continuous matrix variables A and S. Given an initial estimate $\hat{\mathbf{A}}$ of A, the minimization of f(Â, S) with respect to the continuous variable S is a separable Least Squares problem, and the estimate of S is given by the Least Squares solution of (9.102),

Equation 9.103. $\hat{\mathbf{S}} = \left(\hat{\mathbf{A}}^H\hat{\mathbf{A}}\right)^{-1}\hat{\mathbf{A}}^H\,\mathbf{U}$

Each element of the solution Ŝ is then divided by its absolute value to make each signal unit modulus. That is, each signal is projected onto the unit circle. A better estimate of A is then obtained by minimizing f(A, Ŝ ) with respect to A, keeping Ŝ fixed. This is again a Least Squares problem and the new estimate of A is given by the Least Squares solution,

Equation 9.104. $\hat{\mathbf{A}} = \mathbf{U}\,\hat{\mathbf{S}}^H\left(\hat{\mathbf{S}}\,\hat{\mathbf{S}}^H\right)^{-1}$

Since the first element in each unknown spatial signature is a real value, it can be made equal to unity by dividing the elements of the spatial signature vectors by their first element. This process is continued until $\hat{\mathbf{A}}$ converges.

The ILSP-CMA algorithm may be summarized as follows:

  1. Given A0 (start with a random A0), n = 0.

  2. n = n + 1

    1. Compute the Least Squares estimate $\mathbf{S}_n = \left(\mathbf{A}_{n-1}^H\mathbf{A}_{n-1}\right)^{-1}\mathbf{A}_{n-1}^H\,\mathbf{U}$, as in (9.103).

    2. Project all elements of Sn to the closest values on the unit circle (hard limit), yielding Ŝn.

    3. Compute the Least Squares estimate $\mathbf{A}_n = \mathbf{U}\,\hat{\mathbf{S}}_n^H\left(\hat{\mathbf{S}}_n\hat{\mathbf{S}}_n^H\right)^{-1}$, as in (9.104).

  3. Divide each column of An by its first element value.

  4. Repeat steps 2 and 3 until An and An−1 are close enough.
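A sketch of this iteration is given below; the random initialization, iteration cap, and convergence tolerance are illustrative assumptions.

    import numpy as np

    def ilsp_cma(U, D, iters=100, tol=1e-6, seed=0):
        M, N = U.shape
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((M, D)) + 1j * rng.standard_normal((M, D))
        S = None
        for _ in range(iters):
            S = np.linalg.lstsq(A, U, rcond=None)[0]          # LS estimate of S, (9.103)
            S = S / np.abs(S)                                 # project onto the unit circle
            A_new = np.linalg.lstsq(S.conj().T, U.conj().T,   # LS estimate of A, (9.104)
                                    rcond=None)[0].conj().T
            A_new = A_new / A_new[0, :]                       # unity first element per column
            if np.linalg.norm(A_new - A) < tol * np.linalg.norm(A):
                return A_new, S
            A = A_new
        return A, S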

The Integrated Approach to DOA Estimation

In Section 9.5 it was shown how to obtain an estimate of multiple signals and their spatial signatures using the ILSP-CMA algorithm. If a signal has only one component, its spatial signature is identical to the steering vector corresponding to its Direction-Of-Arrival. Therefore, if we have an estimate of the spatial signature of a signal with a single component, we can estimate its Direction-Of-Arrival by observing its spatial signature. The Direction-Of-Arrival can be estimated by searching through all possible steering vectors and determining the one closest, in the 2-norm sense, to the estimated spatial signature. Mathematically, the Direction-Of-Arrival φ is given by

Equation 9.105. $\hat{\phi} = \arg\min_{\phi}\; \left\|\mathbf{a}(\phi) - \mathbf{a}_{ss}\right\|_2$

where a(φ) is the steering vector corresponding to Direction-Of-Arrival φ, and $\mathbf{a}_{ss}$ is the estimated spatial signature.
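A sketch of the search in (9.105), assuming the estimated signature has been normalized (as in ILSP-CMA) so that its first element is unity, matching the steering-vector convention used in the earlier snippets.

    import numpy as np

    def doa_from_signature(a_ss, M, phis=np.arange(0.0, 180.5, 0.5)):
        # scan the array manifold for the steering vector closest in the 2-norm (9.105)
        errs = [np.linalg.norm(steering_vector(p, M) - a_ss) for p in phis]
        return phis[int(np.argmin(errs))]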

Xu and Liu proposed a novel technique to estimate the Directions-Of-Arrival of the direct and multipath components of a signal from the spatial signature [Xu95]. In subspace based algorithms, in order to determine the Directions-Of-Arrival, a covariance matrix whose signal subspace is the span of A must be constructed. If an estimate of the spatial signature matrix A is available, we can form a spatial signature covariance matrix Raa = AAH on which the eigen decomposition may equivalently be performed to obtain the Directions-Of-Arrival. In the presence of coherent signals, forward/backward averaging is required to form a spatially smoothed spatial signature covariance matrix before eigen decomposition.

When the spatial signature estimate of a source exists, Xu and Liu proposed a technique to estimate the Directions-Of-Arrival of the various components of the signal [Xu95]. By applying the standard forward/backward spatial smoothing techniques discussed in Section 9.4.1 [Pil89a] to the spatial signature vector ass, a smoothed spatial signature covariance matrix can be formed as

Equation 9.106. $\tilde{\mathbf{R}}_{aa} = \dfrac{1}{2K}\sum_{k=0}^{K-1}\left(\mathbf{a}_k\mathbf{a}_k^H + \mathbf{J}\left(\mathbf{a}_k\mathbf{a}_k^H\right)^{*}\mathbf{J}\right)$

where J is the permutation matrix with all zeros except ones in the anti-diagonal elements, K is the smoothing factor (number of subarrays), and

Equation 9.107. $\mathbf{a}_k = \left[a_{ss}(k)\; a_{ss}(k+1)\;\cdots\; a_{ss}(k+M-K)\right]^T, \qquad k = 0, \ldots, K-1$

Subspace based algorithms such as MUSIC and ESPRIT can now be applied to this smoothed spatial signature covariance matrix, and up to 2M/3 DOAs of coherent components can be estimated per signature. Note that by using ILSP-CMA to estimate M spatial signatures, and applying subspace techniques such as MUSIC or ESPRIT to the smoothed spatial signature covariance matrices of each spatial signature, it is possible to estimate up to 2M²/3 DOA's using an M-element array. In a situation where there are multiple cochannel users with each user having multiple components, this technique can determine the Directions-Of-Arrival of multiple components of multiple users and associate each component with the correct user [Muh96b].

Computer simulations were run to study the performance of ILSP-CMA for spatial signature estimation along with MUSIC with forward and conjugate backward averaging. Figure 9-6 shows the estimated MUSIC spectrum for the case of a uniformly spaced linear array with six elements with an interelement spacing of a half wavelength. For spatial smoothing the array was divided into two overlapping 5-element subarrays. Six uncorrelated narrowband signals, each with a direct and three multipath components, are incident on the array. The multipath components are 10 dB below the direct component and the direct signal-to-noise ratio was 20 dB. The first signal had components arriving at 80, 110, 140, and 170 degrees, the second signal had components at 20, 90, 130, and 150 degrees, the third signal had components at 30, 60, 90, and 120 degrees, the fourth signal had components at 10, 40, 70, and 100 degrees, the fifth signal had components at 45, 75, 105, and 135 degrees, and the sixth signal had components at 10, 60, 100, and 110 degrees. Figure 9-6 shows that ILSP-CMA, along with forward and conjugate backward averaged MUSIC, is able to resolve a total of 24 signal components [Muh96b].

Figure 9-6. Example of spatial spectrum estimated using ILSP-CMA for spatial signature estimation, followed by MUSIC with forward/backward averaging. The six element array is able to resolve 24 direct and multipath components and associate each component to the appropriate signal (user) [Muh96a].

Detection of Number of Sources in Eigen Decomposition

Many of the DOA estimation algorithms described earlier require that the number of sources be known or estimated. Detection of the number of sources impinging on the array is a key step in most of the superresolution DOA estimation techniques. In the eigen decomposition based techniques, an estimate of the number of sources is obtained from an estimate of the number of repeated smallest eigenvalues. Since in practice the input sample covariance matrix is formed using a finite set of samples, the smallest eigenvalues are not exactly equal. Various statistical methods have been proposed to test for the equality or closeness of eigenvalues, which can be used to estimate the number of sources.

The SH, MDL and AIC Criteria

Anderson [And63] showed that a useful statistic for testing the closeness of eigenvalues is

Equation 9.108. $L(d) = -N(M-d)\,\log\!\left[\dfrac{\left(\prod_{i=d+1}^{M}\hat{\lambda}_i\right)^{1/(M-d)}}{\dfrac{1}{M-d}\sum_{i=d+1}^{M}\hat{\lambda}_i}\right]$

where $\hat{\lambda}_i$ is the ith estimated eigenvalue (sorted in descending order), d is the hypothesized estimate of the number of signals, M is the number of elements in the array, and N is the size of the sample data block. In (9.108), the closeness of the eigenvalues is measured as the ratio of their geometric mean to their arithmetic mean. By setting up a subjective threshold γd, a sequential hypothesis (SH) test can be performed and the first d such that L(d) < γd can be taken as the estimate of the number of signals D.

While the above sequential hypothesis testing to determine the number of sources is computationally attractive, the need to set up a subjective threshold is a major disadvantage. Wax and Kailath [Wax85] proposed two other detection schemes based on the application of the Akaike information theoretic criterion (AIC) and the Rissanen Minimum Descriptive Length (MDL) criterion [Ris78]. These methods do not require a subjective threshold, and the number of sources is determined as the value for which the AIC or MDL criterion is minimized.

In the AIC-based approach, the number of signals $\hat{D}$ is determined as the value of d ∈ {0, 1, . . ., M − 1} which minimizes the following criterion:

Equation 9.109. $\mathrm{AIC}(d) = -2N(M-d)\,\log\!\left[\dfrac{\left(\prod_{i=d+1}^{M}\lambda_i\right)^{1/(M-d)}}{\dfrac{1}{M-d}\sum_{i=d+1}^{M}\lambda_i}\right] + 2d(2M-d)$

where λi are the eigenvalues of the sample covariance matrix $\hat{\mathbf{R}}_{uu}$, N is the number of snapshots used to compute $\hat{\mathbf{R}}_{uu}$, and M is the number of elements in the array. The first term in equation (9.109) is derived directly from the log-likelihood function, and the second term is the penalty factor added by the AIC criterion.

In the MDL-based approach, the number of signals is determined as the argument which minimizes the following criterion:

Equation 9.110. $\mathrm{MDL}(d) = -N(M-d)\,\log\!\left[\dfrac{\left(\prod_{i=d+1}^{M}\lambda_i\right)^{1/(M-d)}}{\dfrac{1}{M-d}\sum_{i=d+1}^{M}\lambda_i}\right] + \dfrac{1}{2}\,d(2M-d)\log N$

Here again, the first term is derived directly from the log-likelihood function, and the second term is the penalty factor added by the MDL criterion. Wax and Kailath [Wax85] showed through simulations that the MDL criterion yields a consistent estimate of the number of signals, and the AIC yields an inconsistent estimate that tends, asymptotically, to overestimate the number of signals.
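A sketch applying both criteria to the eigenvalues of a sample covariance matrix; the eigenvalues are assumed sorted in descending order, and N is the number of snapshots.

    import numpy as np

    def detect_sources(lam, N):
        # returns (AIC estimate, MDL estimate) of the number of signals
        M = len(lam)
        aic, mdl = [], []
        for d in range(M):
            tail = lam[d:]                          # the M - d smallest eigenvalues
            geo = np.exp(np.mean(np.log(tail)))     # geometric mean
            ari = np.mean(tail)                     # arithmetic mean
            ll = -N * (M - d) * np.log(geo / ari)   # log-likelihood term
            aic.append(2 * ll + 2 * d * (2 * M - d))                # (9.109)
            mdl.append(ll + 0.5 * d * (2 * M - d) * np.log(N))      # (9.110)
        return int(np.argmin(aic)), int(np.argmin(mdl))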

Xu et al. [Xu94a] showed that the SH, AIC, and MDL procedures cannot be applied directly to situations where spatial smoothing preprocessing is involved. The spatial smoothing preprocessing operation complicates the source order detection process. Xu et al. modified the AIC and MDL criteria so that they can be applied correctly to situations where spatial smoothing is performed. Essentially, this leads to a change in the penalty function associated with the AIC and MDL criteria. If the penalty functions (i.e., the second terms in (9.109) and (9.110)) are denoted PAIC(d) and PMDL(d), respectively, they must be modified as shown in Table 9-1 for the various spatial smoothing cases.

Table 9-1. Modified Values of the Penalty Function in AIC and MDL Detection Schemes

  Spatial Smoothing Type          PAIC(d)               PMDL(d)
  Forward only                    d(2M − 2d + 1)        0.5 d(2M − 2d + 1) log N
  Forward/conjugate backward      0.5 d(2M − d + 1)     0.25 d(2M − d + 1) log N

Order Estimation Using Transformed Gerschgorin Radii

Wu et al. [Wu95] proposed a new technique for source order estimation based on the effective use of the Gerschgorin radii of a unitary transformed input covariance matrix. The Gerschgorin theorem on the eigenvalues of a matrix provides a method for estimating the location of the eigenvalues from the values of the matrix elements. For an M × M matrix A = {aij}, Gerschgorin proved that all of the eigenvalues of the matrix are contained in the union of M disks Oi, i = 1, ..., M. These disks are centered at aii and have radii, called the Gerschgorin radii ri, equal to the sum of the magnitudes of all elements of the ith row vector, excluding the ith element. That is,

Equation 9.111. $r_i = \sum_{\substack{j=1\\ j\neq i}}^{M}\left|a_{ij}\right|$

In other words, the eigenvalues of a matrix are located within the Gerschgorin disks which represent the collection of points in the complex plane whose distance to aii is, at most, ri. That is, Oi represents the collection of complex numbers z with the property of

Equation 9.112. $O_i = \left\{\, z \in \mathbb{C} \;:\; \left|z - a_{ii}\right| \leq r_i \,\right\}$

Wu et al. observed that at low signal-to-noise ratio conditions, the eigenvalues of the input covariance matrix Ruu are spread across a large range and the Gerschgorin disks of the matrix tightly overlap. Through a proper unitary transformation, which preserves the eigenvalues, the overlap of the Gerschgorin disks can be reduced, and the disks can then be effectively used for source number detection. The idea is to rotate the covariance matrix so that its Gerschgorin disks form two distinct signal and noise constellations. The signal constellation, with larger Gerschgorin radii, will contain exactly the D largest (signal) eigenvalues, and the noise constellation, with small Gerschgorin radii, will contain the remaining noise eigenvalues. That is, a unitary transformation should be chosen so that the noise Gerschgorin disks of the transformed covariance matrix are small and as far away from the signal Gerschgorin disks as possible. Once this is achieved, it is relatively easy to estimate the source order by classification of the disks.

In the proposed method, the covariance matrix is first partitioned as follows:

Equation 9.113. $\mathbf{R}_{uu} = \begin{bmatrix} \mathbf{R}_1 & \mathbf{r} \\ \mathbf{r}^H & r_{MM} \end{bmatrix}$

where R1 is the (M − 1) × (M − 1) leading principal submatrix of Ruu. The reduced covariance matrix R1 can also be decomposed by its eigen-structure as

Equation 9.114. $\mathbf{R}_1 = \mathbf{U}_1\,\mathbf{D}_1\,\mathbf{U}_1^H$

where U1 is the (M − 1) × (M − 1) unitary matrix formed by the eigenvectors of R1 as

Equation 9.115. $\mathbf{U}_1 = \left[\mathbf{e}_1\;\mathbf{e}_2\;\cdots\;\mathbf{e}_{M-1}\right]$

and D1 is the diagonal matrix constructed from the corresponding eigenvalues as

Equation 9.116. $\mathbf{D}_1 = \mathrm{diag}\left\{\lambda_1',\, \lambda_2',\, \ldots,\, \lambda_{M-1}'\right\}$

where $\lambda_1' \geq \lambda_2' \geq \cdots \geq \lambda_{M-1}'$. If the eigenvalues of the original covariance matrix Ruu are denoted λ1, λ2, . . ., λD, . . ., λM, it can be shown that the eigenvalues of Ruu and R1 satisfy the interlacing property,

Equation 9.117. $\lambda_1 \geq \lambda_1' \geq \lambda_2 \geq \lambda_2' \geq \cdots \geq \lambda_{M-1} \geq \lambda_{M-1}' \geq \lambda_M$

By defining a unitary transformation U

Equation 9.118. $\mathbf{U} = \begin{bmatrix} \mathbf{U}_1 & \mathbf{0} \\ \mathbf{0}^T & 1 \end{bmatrix}$

the transformed input covariance matrix becomes

Equation 9.119. $\mathbf{Q}_{uu} = \mathbf{U}^H\,\mathbf{R}_{uu}\,\mathbf{U}$

which can be shown to be equal to [Wu95]

Equation 9.120. $\mathbf{Q}_{uu} = \begin{bmatrix} \mathbf{D}_1 & \mathbf{U}_1^H\mathbf{r} \\ \mathbf{r}^H\mathbf{U}_1 & r_{MM} \end{bmatrix} = \begin{bmatrix} \lambda_1' & & & \rho_1 \\ & \ddots & & \vdots \\ & & \lambda_{M-1}' & \rho_{M-1} \\ \rho_1^{*} & \cdots & \rho_{M-1}^{*} & r_{MM} \end{bmatrix}$

where

Equation 9.121. $\rho_i = \mathbf{e}_i^H\,\mathbf{r}, \qquad i = 1, \ldots, M-1$

The Gerschgorin disks of the transformed covariance matrix Quu possess the Gerschgorin radii

Equation 9.122. $r_i = \left|\rho_i\right| = \left|\mathbf{e}_i^H\,\mathbf{r}\right|, \qquad i = 1, \ldots, M-1$
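A sketch of the transformation and radii computation of (9.113) through (9.122): rotate Ruu by the block-diagonal unitary matrix built from the eigenvectors of its leading principal submatrix, and read the Gerschgorin radii off the last column of the result.

    import numpy as np

    def gerschgorin_radii(R):
        M = R.shape[0]
        R1 = R[:M - 1, :M - 1]              # leading principal submatrix (9.113)
        _, U1 = np.linalg.eigh(R1)
        U1 = U1[:, ::-1]                    # eigenvectors, descending eigenvalues (9.115)
        U = np.block([[U1, np.zeros((M - 1, 1))],
                      [np.zeros((1, M - 1)), np.ones((1, 1))]])      # (9.118)
        Q = U.conj().T @ R @ U              # transformed covariance (9.119)
        return np.abs(Q[:M - 1, -1])        # radii r_i = |e_i^H r|, (9.122)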

By incorporating the Gerschgorin radii information into the log-likelihood function, Wu et al derived a new source order estimator function which was called the Gerschgorin Likelihood Estimator (GLE). The GLE function is given by [Wu95]

Equation 9.123. 

where N is the number of data snapshots.

By applying the AIC and MDL penalty factors to equation (9.123), two source order estimation functions can be obtained [Muh96a][Rap98]. They are called the Gerschgorin AIC (GAIC) and Gerschgorin MDL (GMDL) criteria respectively, and are given by

Equation 9.124. 

Equation 9.125. 

The number of sources is determined as the argument which minimizes these functions. It is shown in [Wu95] that the GAIC and GMDL estimators are more consistent than the simple AIC and MDL estimators. Further, since the GAIC and GMDL techniques involve the eigen decomposition of an (M − 1) × (M − 1) matrix, as opposed to the M × M matrix required by the AIC and MDL techniques, they are computationally more efficient. However, since they are based on a submatrix of Ruu, they can detect only up to M − 2 sources, as opposed to the M − 1 sources that can be detected using the regular AIC and MDL criteria.

All the source estimators discussed so far are based on the assumptions of Gaussian and spatially white noise. Wu et al have also derived modified versions of GAIC and GMDL which they call MGAIC and MGMDL respectively, which yield consistent estimates under non-white noise conditions and when only a few snapshots of data are available [Wu95].

Summary

In this chapter, we described various techniques for estimating the Direction-Of-Arrival of radio signals impinging on an antenna array. A survey of the most promising and classic algorithms used for DOA estimation was presented. A discussion of the source order estimation algorithms was presented in Section 9.7. The final chapter of this text explores how these DOA techniques may be applied to the timely subject of position location.
