The integral in the numerator is then expanded along its first column, the remainder of the matrix having a Vandermonde determinant structure. What remains is an integral form of the parameter g1. Hence the result.
□
The scenario where K ≥ 1 unfolds similarly. The final theorem can be found in [2]. The receiver operating characteristic (ROC) curve of the Neyman-Pearson test against that of the energy detector is provided in Figure 8.5 for N = 4, M = 8, and σ² = 3 dBm. We observe a significant gain in detection rate achieved by the Neyman-Pearson test compared with the classical energy detector.
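For readers wishing to reproduce the qualitative behavior of the energy detector baseline, the following sketch simulates its ROC by Monte Carlo. The rank-one signal model, SNR, and trial count are illustrative assumptions, not the exact setting of Figure 8.5, and the Neyman-Pearson test itself is not implemented here.

```python
import numpy as np

def energy_detector_roc(N, M, snr, trials, rng):
    """Monte Carlo ROC of the energy detector: the statistic is the total
    received energy over N sensors and M samples, swept over thresholds."""
    t0 = np.empty(trials)  # statistic under H0 (noise only)
    t1 = np.empty(trials)  # statistic under H1 (rank-one signal plus noise)
    for t in range(trials):
        t0[t] = np.sum(rng.standard_normal((N, M)) ** 2)
        h = rng.standard_normal((N, 1))
        x = rng.standard_normal((1, M))
        t1[t] = np.sum((np.sqrt(snr) * h @ x
                        + rng.standard_normal((N, M))) ** 2)
    thresholds = np.sort(np.concatenate([t0, t1]))
    pfa = np.array([(t0 > th).mean() for th in thresholds])
    pd = np.array([(t1 > th).mean() for th in thresholds])
    return pfa, pd

pfa, pd = energy_detector_roc(N=4, M=8, snr=1.0, trials=2000,
                              rng=np.random.default_rng(0))
auc = abs(np.sum(0.5 * (pd[1:] + pd[:-1]) * np.diff(pfa)))  # trapezoid AUC
```

Plotting pd against pfa gives the energy detector's ROC; the area under the curve (auc) summarizes its detection capability.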
This completes this section on hypothesis testing. In the following section, we go beyond the hypothesis test and move to the question of parameter inference in a slightly more complex data plus noise model than above.
We consider a scenario similar to that of the previous section, where now the K sources use different transmit powers P1, …, PK, which the receiver must infer from successive observations.
Consider K data sources transmitting simultaneously. Transmitter k ∈ {1, …, K} has power P_k and space dimension n_k, e.g., is composed of n_k antennas. We denote n ≜ ∑_{k=1}^{K} n_k. The signal received at time m reads

y^{(m)} = \sum_{k=1}^{K} \sqrt{P_k}\, H_k x_k^{(m)} + \sigma w^{(m)}.
Assuming the filter coefficients are constant over at least M consecutive sampling periods, by concatenating M successive signal realizations into Y = [y(1),…, y(M)] ∈ ℂN×M, we have
Y = \sum_{k=1}^{K} \sqrt{P_k}\, H_k X_k + \sigma W,
where X_k = [x_k^{(1)}, \ldots, x_k^{(M)}] ∈ ℂ^{n_k×M} and W = [w^{(1)}, \ldots, w^{(M)}] ∈ ℂ^{N×M}. This can be written more compactly as

Y = H P^{\frac12} X + \sigma W,   (8.31)

where P ∈ ℝ^{n×n} is diagonal with first n_1 entries equal to P_1, next n_2 entries equal to P_2, etc., and last n_K entries equal to P_K, H = [H_1, \ldots, H_K] ∈ ℂ^{N×n}, and X = [X_1^{\mathsf{T}}, \ldots, X_K^{\mathsf{T}}]^{\mathsf{T}} ∈ ℂ^{n×M}.
Our objective is to infer the values of the powers P1, …, PK from the realization of a single random matrix Y. This is successively performed using different approaches in the following sections. We first consider the conventional approach that assumes n small, N much larger than n, and M much larger than N. This will lead to a simple although largely biased estimation algorithm, which will then be improved using Stieltjes transform approaches in the same spirit as in Section 8.4.
Conventional Approach
The first approach assumes numerous sensors in order to have much diversity in the observation vectors, as well as an even larger number of observations, in order to create an averaging effect on the incoming random data. In this situation, let us rewrite (8.31) under the form
(8.32)
We shall denote by λ_1 ≤ ⋯ ≤ λ_N the ordered eigenvalues of \frac{1}{M}YY^\dagger.
Appending Y ∈ ℂ^{N×M} into the larger matrix \underline{Y} ∈ ℂ^{(N+n)×M},

\underline{Y} = \begin{pmatrix} HP^{\frac12} & \sigma I_N \\ 0 & 0 \end{pmatrix}\begin{pmatrix} X \\ W \end{pmatrix},
we recognize that, conditioned on H, \frac{1}{M}\underline{Y}\underline{Y}^\dagger is a sample covariance matrix: its population covariance matrix is

T \triangleq \begin{pmatrix} HPH^\dagger + \sigma^2 I_N & 0 \\ 0 & 0 \end{pmatrix}

and the random matrix

\begin{pmatrix} X \\ W \end{pmatrix}
has independent (not necessarily identically distributed) entries with zero mean and unit variance. The population covariance matrix T, whose upper-left block is itself unitarily equivalent to a sample covariance matrix, clearly has an almost sure limit spectral distribution as N grows large for fixed or slowly growing n. Extending Theorem 8.2.9 and Theorem 8.3.3 to c = 0 and applying them twice (once for the population covariance matrix T and once for \frac{1}{M}\underline{Y}\underline{Y}^\dagger), we find that the spectrum of \frac{1}{M}YY^\dagger is asymptotically close to that of HPH^\dagger + \sigma^2 I_N, i.e., N − n eigenvalues concentrate around σ² while, for each k, n_k eigenvalues concentrate in the vicinity of P_k + σ².
If σ2 is a priori known, a rather trivial estimator of Pk is then given by
\frac{1}{n_k}\sum_{i\in\mathcal{N}_k}\left(\lambda_i - \sigma^2\right),

where \mathcal{N}_k \triangleq \left\{\sum_{j=1}^{k-1} n_j + 1, \ldots, \sum_{j=1}^{k} n_j\right\}.
This means in practice that P_K is asymptotically well approximated by the average of the n_K largest eigenvalues of \frac{1}{M}YY^\dagger, minus σ². If σ² is not known a priori, it can in turn be estimated from the smallest eigenvalues, leading to the estimator
\hat{P}_k^{\infty} = \frac{1}{n_k}\sum_{i\in\mathcal{N}_k}\left(\lambda_i - \hat{\sigma}^2\right),
where
\hat{\sigma}^2 = \frac{1}{N-n}\sum_{i=1}^{N-n}\lambda_i.
Incidentally, although not derived on purpose, the refined (n, N, M)-consistent estimator of Section 8.5.2 will turn out not to depend on prior knowledge of σ². Note that the estimation of P_k only relies on n_k contiguous eigenvalues of \frac{1}{M}YY^\dagger.
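The conventional estimator above can be sketched in a few lines. The simulation below assumes unit-variance complex Gaussian entries, with columns of H normalized so that HPH^† has eigenvalues close to the P_k; the dimension and power choices are illustrative, and grouping the n largest eigenvalues into ascending clusters is a simplifying assumption, valid when the powers are well separated.

```python
import numpy as np

def conventional_estimates(N, M, n_ks, powers, sigma2, rng):
    """Conventional (n << N << M) power estimates: per-cluster eigenvalue
    averages of (1/M) Y Y^dagger, minus the estimated noise variance."""
    n = sum(n_ks)
    # Complex Gaussian model (8.31); columns of H have unit mean square norm
    # so that H P H^dagger has eigenvalues close to the P_k (an assumption).
    H = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2 * N)
    X = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)
    W = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
    Y = H @ np.diag(np.repeat(np.sqrt(powers), n_ks)) @ X + np.sqrt(sigma2) * W
    lam = np.sort(np.linalg.eigvalsh(Y @ Y.conj().T / M))  # ascending
    sigma2_hat = lam[: N - n].mean()          # hat sigma^2 from the noise bulk
    top, est, idx = lam[N - n:], [], 0        # n largest eigenvalues
    for nk in n_ks:                           # weakest source first (P_1 < ... < P_K)
        est.append(top[idx: idx + nk].mean() - sigma2_hat)
        idx += nk
    return est

print(conventional_estimates(64, 8192, [2, 2], [1.0, 3.0], 0.1,
                             np.random.default_rng(0)))
```

The estimates are close to the true powers but carry the bias discussed above, which motivates the refined method of the next section.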
The Stieltjes Transform Method
The Stieltjes transform approach relies heavily on the techniques from Mestre, established in [24] and introduced in Section 8.4. We provide hereafter only the main steps of the method. The details can be found in [23].
Limiting spectrum of BN. In this section, we prove the following result.
Theorem 8.5.3. Let B_N = \frac{1}{M}YY^\dagger and \underline{B}_N = \frac{1}{M}Y^\dagger Y, with Y defined in (8.31). Then, as N, M, n_1, \ldots, n_K → ∞ with M/N → c > 0 and N/n_k → c_k > 0, the e.s.d. of B_N converges weakly and almost surely to a distribution function F whose Stieltjes transform satisfies, for z ∈ ℂ⁺,

m_F(z) = c\, m_{\underline{F}}(z) + (c - 1)\frac{1}{z}   (8.33)
where mF̲(z) is the unique solution with positive imaginary part of the implicit equation in mF̲ ,
\frac{1}{m_{\underline{F}}} = -\sigma^2 + \frac{1}{f} - \sum_{k=1}^{K}\frac{1}{c_k}\frac{P_k}{1 + P_k f}   (8.34)
in which we denoted by f the value

f \triangleq (1-c)\, m_{\underline{F}} - c\, z\, m_{\underline{F}}^2.
Proof. First remember that the matrix Y in (8.31) can be extended into the larger matrix \underline{Y} ∈ ℂ^{(N+n)×M} given by

\underline{Y} = \begin{pmatrix} HP^{\frac12} & \sigma I_N \\ 0 & 0 \end{pmatrix}\begin{pmatrix} X \\ W \end{pmatrix}.
From Theorem 8.2.9, since H has independent entries with finite fourth order moments, we have that the e.s.d. of HPH† converges weakly and almost surely to a limit distribution G as N, n1,…, nK → ∞ with N/nk → ck > 0. For z ∈ ℂ+, the Stieltjes transform mG(z) of G is the unique solution with positive imaginary part of the equation in mG,
z = -\frac{1}{m_G} + \sum_{k=1}^{K}\frac{1}{c_k}\frac{P_k}{1 + P_k m_G}.   (8.35)
The almost sure convergence of the e.s.d. of HPH^\dagger ensures the almost sure convergence of the e.s.d. of the matrix \begin{pmatrix} HPH^\dagger + \sigma^2 I_N & 0 \\ 0 & 0 \end{pmatrix} to a limit distribution H, whose Stieltjes transform m_H(z) is given by

m_H(z) = \frac{c_0}{1 + c_0}\, m_G(z - \sigma^2) - \frac{1}{1 + c_0}\frac{1}{z},   (8.36)
for z ∈ ℂ⁺, where we denoted by c_0 the limit of the ratio N/n, i.e., c_0 = \left(c_1^{-1} + \cdots + c_K^{-1}\right)^{-1}.
As a consequence, the sample covariance matrix \frac{1}{M}\underline{Y}^\dagger\underline{Y} \;(= \frac{1}{M}Y^\dagger Y) has an almost sure limit spectral distribution \underline{F}, whose Stieltjes transform m_{\underline{F}}(z) satisfies

z = -\frac{1}{m_{\underline{F}}} + \frac{1}{c}\left(1 + \frac{1}{c_0}\right)\int \frac{t}{1 + t\, m_{\underline{F}}}\, dH(t) = -\frac{1}{m_{\underline{F}}} + \frac{1 + \frac{1}{c_0}}{c\, m_{\underline{F}}}\left(1 - \frac{1}{m_{\underline{F}}}\, m_H\!\left(-\frac{1}{m_{\underline{F}}}\right)\right)   (8.37)
for all z ∈ ℂ+.
For z ∈ ℂ+, mF̲(z) ∈ ℂ+. Therefore −1/mF̲(z) ∈ ℂ+ and one can evaluate (8.36) at −1/mF̲(z). Combining (8.36) and (8.37), we then have
z = -\frac{1}{c}\frac{1}{m_{\underline{F}}(z)^2}\, m_G\!\left(-\frac{1}{m_{\underline{F}}(z)} - \sigma^2\right) + \left(\frac{1}{c} - 1\right)\frac{1}{m_{\underline{F}}(z)},   (8.38)
where, according to (8.35), m_G(-1/m_{\underline{F}}(z) - σ²) satisfies
\frac{1}{m_{\underline{F}}(z)} = -\sigma^2 + \frac{1}{m_G\!\left(-\frac{1}{m_{\underline{F}}(z)} - \sigma^2\right)} - \sum_{k=1}^{K}\frac{1}{c_k}\frac{P_k}{1 + P_k\, m_G\!\left(-\frac{1}{m_{\underline{F}}(z)} - \sigma^2\right)}.   (8.39)
Together with (8.38), this is exactly (8.34), with f(z) = m_G\!\left(-\frac{1}{m_{\underline{F}}(z)} - \sigma^2\right) = (1-c)\, m_{\underline{F}}(z) - c\, z\, m_{\underline{F}}(z)^2.
Since the eigenvalues of the matrices B_N and \underline{B}_N only differ by M − N zeros, we also have that the Stieltjes transform m_F(z) of the l.s.d. of B_N satisfies

m_F(z) = c\, m_{\underline{F}}(z) + (c - 1)\frac{1}{z}.   (8.40)
This completes the proof of Theorem 8.5.3.
□
For further usage, notice here that (8.40) provides a simplified expression for m_G(-1/m_{\underline{F}}(z) - σ²). Indeed, we have

m_G\!\left(-\frac{1}{m_{\underline{F}}(z)} - \sigma^2\right) = -z\, m_F(z)\, m_{\underline{F}}(z).   (8.41)
Therefore, the support of the (almost sure) l.s.d. F of B_N can be evaluated as follows: for any z ∈ ℂ⁺, m_F(z) is given by (8.33), in which m_{\underline{F}}(z) is the solution of (8.34); the inverse Stieltjes transform formula (8.4) then allows one to evaluate F from m_F(z), for z = x + iy with x > 0 and y small.
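A numerical sanity check of Theorem 8.5.3 is instructive: for large dimensions, the empirical Stieltjes transform of \frac{1}{M}Y^\dagger Y should nearly satisfy the implicit equation (8.34). All parameter values below are illustrative, and the entry variance conventions (unit-variance complex Gaussian entries, columns of H of unit mean square norm) are assumptions matching the model as reconstructed here.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, n_ks, powers, sigma2 = 300, 900, [30, 30], [1.0, 3.0], 0.3
n, c = sum(n_ks), M / N                      # c plays the role of lim M/N

def crandn(*shape):
    """I.i.d. standard complex Gaussian entries (unit variance)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Model (8.31): Y = H P^{1/2} X + sigma W, columns of H of unit mean square norm.
H = crandn(N, n) / np.sqrt(N)
Y = H @ np.diag(np.repeat(np.sqrt(powers), n_ks)) @ crandn(n, M) \
    + np.sqrt(sigma2) * crandn(N, M)
lam = np.linalg.eigvalsh(Y @ Y.conj().T / M)  # eigenvalues of B_N

# Empirical Stieltjes transforms of B_N and of (1/M) Y^dagger Y at z in C^+.
z = 1.0 + 1.0j
m_F = np.mean(1.0 / (lam - z))
m_uF = (N / M) * m_F - (M - N) / M / z

# Residual of (8.34): 1/m = -sigma^2 + 1/f - sum_k (1/c_k) P_k / (1 + P_k f),
# with f = (1 - c) m - c z m^2 and 1/c_k = n_k / N.
f = (1 - c) * m_uF - c * z * m_uF ** 2
rhs = -sigma2 + 1 / f - sum((nk / N) * P / (1 + P * f)
                            for nk, P in zip(n_ks, powers))
residual = abs(1 / m_uF - rhs)
print(residual)  # small for large N, M
```

The residual shrinks as the dimensions grow, in agreement with the almost sure convergence stated in the theorem.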
Multisource power inference. In the following, we finally prove the main result of this section, which provides the G-estimator P̂1,…, P̂K of the transmit powers P1,…, PK.
Theorem 8.5.4. Let B_N ∈ ℂ^{N×N} be defined as B_N = \frac{1}{M}YY^\dagger, with Y defined in (8.31), and assume that the cluster associated with P_k in the spectrum of B_N is distinct from the neighboring clusters. Then, as N, M grow large,

\hat{P}_k - P_k \xrightarrow{\text{a.s.}} 0,
where the estimate P̂k is given by
• if M ≠ N,
\hat{P}_k = \frac{NM}{n_k(M-N)}\sum_{i\in\mathcal{N}_k}\left(\eta_i - \mu_i\right);
• if M = N,
\hat{P}_k = \frac{N}{n_k(N-n)}\sum_{i\in\mathcal{N}_k}\left(\sum_{j=1}^{N}\frac{\eta_i}{(\lambda_j - \eta_i)^2}\right)^{-1};
in which \mathcal{N}_k \triangleq \{\sum_{j=1}^{k-1} n_j + 1, \ldots, \sum_{j=1}^{k} n_j\}, \eta_1 ≤ ⋯ ≤ \eta_N are the ordered eigenvalues of the matrix \mathrm{diag}(\lambda) - \frac{1}{N}\sqrt{\lambda}\sqrt{\lambda}^{\mathsf{T}}, and \mu_1 ≤ ⋯ ≤ \mu_N are the ordered eigenvalues of the matrix \mathrm{diag}(\lambda) - \frac{1}{M}\sqrt{\lambda}\sqrt{\lambda}^{\mathsf{T}}, where \lambda = (\lambda_1, \ldots, \lambda_N)^{\mathsf{T}} and \sqrt{\lambda} = (\sqrt{\lambda_1}, \ldots, \sqrt{\lambda_N})^{\mathsf{T}}.
Remark 8.5.1. We immediately notice that, if N < n, the powers P_1, \ldots, P_l, with l the largest integer such that N - \sum_{i=l}^{K} n_i < 0, cannot be estimated.
Proof. The approach pursued to prove Theorem 8.5.4 relies strongly on the original idea of [22], which was detailed for the case of sample covariance matrices in Section 8.4. From Cauchy’s integration formula,
P_k = c_k \frac{1}{2\pi i}\oint_{\mathcal{C}_k} \frac{1}{c_k}\frac{\omega}{P_k - \omega}\, d\omega = c_k \frac{1}{2\pi i}\oint_{\mathcal{C}_k} \sum_{r=1}^{K}\frac{1}{c_r}\frac{\omega}{P_r - \omega}\, d\omega   (8.42)
for any negatively oriented contour \mathcal{C}_k ⊂ ℂ, such that P_k is contained in the interior of the contour, while for every i ≠ k, P_i lies outside it. The strategy is very similar to that used for the sample covariance matrix case in Section 8.4. It comes as follows: we first propose a convenient integration contour \mathcal{C}_k which is parametrized by a functional of the Stieltjes transform m_F(z) of the l.s.d. of B_N. We proceed to a variable change in (8.42) to express P_k as a function of m_F(z). We then evaluate the complex integral resulting from replacing the limiting m_F(z) in (8.42) by its empirical counterpart \hat{m}_F(z) = \frac{1}{N}\mathrm{tr}\,(B_N - zI_N)^{-1}, and finally show that this empirical integral is an (n, N, M)-consistent estimator of P_k.
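The starting identity (8.42) is elementary and can be checked numerically by discretizing a small negatively oriented circle around P_k; the powers and ratios c_k below are arbitrary test values.

```python
import numpy as np

def contour_estimate(P, cks, k, radius=0.05, n_pts=4000):
    """Evaluate c_k/(2 pi i) * oint sum_r (1/c_r) w/(P_r - w) dw over a small
    negatively oriented (clockwise) circle enclosing P[k] only."""
    theta = np.linspace(0.0, 2 * np.pi, n_pts, endpoint=False)
    w = P[k] + radius * np.exp(-1j * theta)      # clockwise parametrization
    dw = -1j * radius * np.exp(-1j * theta)      # dw/dtheta
    integrand = sum(w / (cr * (Pr - w)) for Pr, cr in zip(P, cks))
    return cks[k] * np.sum(integrand * dw) * (2 * np.pi / n_pts) / (2j * np.pi)

P, cks = [0.25, 1.0, 4.0], [8.0, 8.0, 8.0]
val = contour_estimate(P, cks, k=1)
print(val.real)  # recovers P[1] = 1.0
```

Only the r = k term has a pole inside the circle, so the quadrature returns P_k regardless of the other powers, exactly as the residue argument predicts.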
Similar to Section 8.4, it turns out that the clusters generated in the spectrum of B_N can be mapped to one or many power values P_k. In what follows, we assume that the clusters are disjoint so that no holomorphicity problem arises. We can prove the following (given in detail in [23] and [28]): there exist real values x^{(l)}_{k_F} < x^{(r)}_{k_F} such that the cluster of the support of F associated with P_k, hereafter denoted cluster k_F, is contained in [x^{(l)}_{k_F}, x^{(r)}_{k_F}] and is away from the neighboring clusters.
Consider any continuously differentiable complex path Γ_{F,k} with endpoints x^{(l)}_{k_F} and x^{(r)}_{k_F}, whose interior points have positive imaginary part, and denote \mathcal{C}_{F,k} the closed contour formed by Γ_{F,k} and its complex conjugate.
Recall now that Pk was defined as
P_k = c_k\frac{1}{2\pi i}\oint_{\mathcal{C}_k}\sum_{r=1}^{K}\frac{1}{c_r}\frac{\omega}{P_r - \omega}\, d\omega.
With the variable change ω = − 1/mG (t), this becomes
P_k = c_k\frac{1}{2\pi i}\oint_{\mathcal{C}_{G,k}}\sum_{r=1}^{K}\frac{1}{c_r}\frac{-1}{1 + P_r m_G(t)}\,\frac{m_G'(t)}{m_G(t)^2}\, dt = c_k\frac{1}{2\pi i}\oint_{\mathcal{C}_{G,k}}\left(m_G(t)\left[-\frac{1}{m_G(t)} + \sum_{r=1}^{K}\frac{1}{c_r}\frac{P_r}{1 + P_r m_G(t)}\right] + \frac{c_0 - 1}{c_0}\right)\frac{m_G'(t)}{m_G(t)^2}\, dt.
From Equation (8.35), this simplifies into
P_k = \frac{c_k}{c_0}\frac{1}{2\pi i}\oint_{\mathcal{C}_{G,k}}\left(c_0\, t\, m_G(t) + c_0 - 1\right)\frac{m_G'(t)}{m_G(t)^2}\, dt.   (8.43)
Using (8.38) and proceeding with the further change of variable t = −1/mF̲(z) − σ2, (8.43) becomes
P_k = c_k\frac{1}{2\pi i}\oint_{\mathcal{C}_{F,k}}\left(1 + \sigma^2 m_{\underline{F}}(z)\right)\left[-\frac{1}{z\, m_{\underline{F}}(z)} - \frac{m_{\underline{F}}'(z)}{m_{\underline{F}}(z)^2} - \frac{m_F'(z)}{m_F(z)\, m_{\underline{F}}(z)}\right] dz.   (8.44)
This whole process of variable changes allows us to describe Pk as a function of mF (z), the Stieltjes transform of the almost sure limiting spectral distribution of BN, as N → ∞. It then remains to exhibit a relation between Pk and the empirical spectral distribution of BN for finite N. This is what the subsequent section is dedicated to.
Let us now define \hat{m}_F(z) and \hat{m}_{\underline{F}}(z) as the Stieltjes transforms of the empirical eigenvalue distributions of B_N and \underline{B}_N, respectively, i.e.,

\hat{m}_F(z) = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{\lambda_i - z}   (8.45)

and

\hat{m}_{\underline{F}}(z) = \frac{N}{M}\hat{m}_F(z) - \frac{M-N}{M}\frac{1}{z}.
Instead of going further with (8.44), define \hat{P}_k, the “empirical counterpart” of P_k, as the integral (8.46) obtained by substituting \hat{m}_F(z) and \hat{m}_{\underline{F}}(z) for m_F(z) and m_{\underline{F}}(z) in (8.44).
The integrand can then be expanded into nine terms, for which residue calculus can easily be performed. Denote first \eta_1, \ldots, \eta_N the N real roots of \hat{m}_F(z) = 0 and \mu_1, \ldots, \mu_N the N real roots of \hat{m}_{\underline{F}}(z) = 0. We identify three sets of possible poles for the nine aforementioned terms: (i) the set \{\lambda_1, \ldots, \lambda_N\} \cap [x^{(l)}_{k_F}, x^{(r)}_{k_F}], (ii) the set of the \eta_i lying in [x^{(l)}_{k_F}, x^{(r)}_{k_F}], and (iii) the set of the \mu_i lying in [x^{(l)}_{k_F}, x^{(r)}_{k_F}].
Now, we know from Theorem 8.5.3 that \hat{m}_F(z) \xrightarrow{\text{a.s.}} m_F(z) and \hat{m}_{\underline{F}}(z) \xrightarrow{\text{a.s.}} m_{\underline{F}}(z) as N → ∞. Observing that the integrand in (8.46) is uniformly bounded on the compact contour \mathcal{C}_{F,k}, the dominated convergence theorem, Theorem 16.4 of [16], ensures that \hat{P}_k \xrightarrow{\text{a.s.}} P_k.
To go further, we now need to determine which of the \lambda_1, \ldots, \lambda_N, \eta_1, \ldots, \eta_N, and \mu_1, \ldots, \mu_N lie inside \mathcal{C}_{F,k}. It can be proved, by extending Theorem 8.3.1 and Theorem 8.3.4 to the current model, that there will be no eigenvalue of B_N (or \underline{B}_N) outside the support of F, and that the number of eigenvalues inside cluster k_F is exactly n_k. Since \mathcal{C}_{F,k} encloses cluster k_F and is away from the other clusters, \{\lambda_1, \ldots, \lambda_N\} \cap [x^{(l)}_{k_F}, x^{(r)}_{k_F}] = \{\lambda_i, i \in \mathcal{N}_k\} almost surely, for all N large. Also, for any i ∈ {1, \ldots, N}, it is easy to see from (8.45) that \hat{m}_{\underline{F}}(z) → ∞ when z ↑ \lambda_i and \hat{m}_{\underline{F}}(z) → −∞ when z ↓ \lambda_i. Therefore \hat{m}_{\underline{F}}(z) = 0 has at least one solution in each interval (\lambda_{i-1}, \lambda_i), with \lambda_0 = 0, hence \mu_1 < \lambda_1 < \mu_2 < \cdots < \mu_N < \lambda_N. This implies that, if k_0 is the index such that \mathcal{C}_{F,k} contains exactly \lambda_{k_0}, \ldots, \lambda_{k_0 + n_k - 1}, then \mathcal{C}_{F,k} also contains \mu_{k_0+1}, \ldots, \mu_{k_0 + n_k - 1}. The same result holds for \eta_{k_0+1}, \ldots, \eta_{k_0 + n_k - 1}. When these indices exist, due to cluster separability, \eta_{k_0-1} and \mu_{k_0-1} belong, for N large, to cluster k_F − 1. We are then left with determining whether \mu_{k_0} and \eta_{k_0} are asymptotically found inside \mathcal{C}_{F,k}.
For this, we use the same approach as in [22] by noticing that, since 0 is not included in Ck , one has
\frac{1}{2\pi i}\oint_{\mathcal{C}_k}\frac{1}{\omega}\, d\omega = 0.
Performing the same changes of variables as previously, we have
\oint_{\mathcal{C}_{F,k}} \frac{-m_{\underline{F}}(z)\, m_F(z) - z\, m_{\underline{F}}'(z)\, m_F(z) - z\, m_{\underline{F}}(z)\, m_F'(z)}{z\, m_{\underline{F}}(z)\, m_F(z)}\, dz = 0.   (8.48)
For N large, the dominated convergence theorem ensures again that the left-hand side of (8.48) is close to
\oint_{\mathcal{C}_{F,k}} \frac{-\hat{m}_{\underline{F}}(z)\, \hat{m}_F(z) - z\, \hat{m}_{\underline{F}}'(z)\, \hat{m}_F(z) - z\, \hat{m}_{\underline{F}}(z)\, \hat{m}_F'(z)}{z\, \hat{m}_{\underline{F}}(z)\, \hat{m}_F(z)}\, dz.   (8.49)
Residue calculus of (8.49) then leads to
\sum_{\substack{1 \le i \le N \\ \lambda_i \in [x^{(l)}_{k_F},\, x^{(r)}_{k_F}]}} 2 \;-\; \sum_{\substack{1 \le i \le N \\ \eta_i \in [x^{(l)}_{k_F},\, x^{(r)}_{k_F}]}} 1 \;-\; \sum_{\substack{1 \le i \le N \\ \mu_i \in [x^{(l)}_{k_F},\, x^{(r)}_{k_F}]}} 1 \;\xrightarrow{\text{a.s.}}\; 0.   (8.50)
Since the cardinalities of \{i, \eta_i \in [x^{(l)}_{k_F}, x^{(r)}_{k_F}]\} and \{i, \mu_i \in [x^{(l)}_{k_F}, x^{(r)}_{k_F}]\} are at most n_k, (8.50) is satisfied only if both cardinalities equal n_k in the limit. As a consequence, \mu_{k_0} \in [x^{(l)}_{k_F}, x^{(r)}_{k_F}] and \eta_{k_0} \in [x^{(l)}_{k_F}, x^{(r)}_{k_F}]. For N large, N ≠ M, this allows us to simplify (8.47) into

\hat{P}_k = \frac{NM}{n_k(M-N)}\sum_{i\in\mathcal{N}_k}\left(\eta_i - \mu_i\right)   (8.51)
with probability one. The same reasoning holds for M = N. This is our final relation. It now remains to show that the \eta_i and the \mu_i are the eigenvalues of \mathrm{diag}(\lambda) - \frac{1}{N}\sqrt{\lambda}\sqrt{\lambda}^{\mathsf{T}} and \mathrm{diag}(\lambda) - \frac{1}{M}\sqrt{\lambda}\sqrt{\lambda}^{\mathsf{T}}, respectively. But this is merely a consequence of [23, Lemma 1].
This concludes the proof of Theorem 8.5.4.
□
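The final identification step can be probed numerically: with an arbitrary vector λ of positive values, the nonzero eigenvalues of diag(λ) − \frac{1}{N}\sqrt{λ}\sqrt{λ}^T cancel \hat{m}_F, while the eigenvalues of diag(λ) − \frac{1}{M}\sqrt{λ}\sqrt{λ}^T cancel \hat{m}_{\underline{F}} (a toy check in the spirit of [23, Lemma 1], not a simulation of the full model).

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 8, 20
lam = np.sort(rng.uniform(1.0, 10.0, N))   # arbitrary positive "eigenvalues"
v = np.sqrt(lam)

eta = np.linalg.eigvalsh(np.diag(lam) - np.outer(v, v) / N)
mu = np.linalg.eigvalsh(np.diag(lam) - np.outer(v, v) / M)

m_F = lambda z: np.mean(1.0 / (lam - z))              # hat m_F
m_uF = lambda z: (N / M) * m_F(z) - (M - N) / M / z   # hat m_{underline F}

# eta contains 0 together with the real roots of hat m_F, while the mu are
# the real roots of hat m_{underline F}, interlacing with the lambda_i.
print(max(abs(m_F(e)) for e in eta[1:]))   # ~ 0
print(max(abs(m_uF(m)) for m in mu))       # ~ 0
```

Note that the smallest eigenvalue of the first matrix is 0 (its characteristic equation is satisfied exactly at the origin), which is why it is excluded from the root check.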
We now evaluate the performance difference between the conventional and the Stieltjes transform inference methods, for K = 3 sources, P1 = 1/16, P2 = 1/4, N = 24 sensors, M = 128 samples, and n1 = n2 = n3 = 4. This is provided in Figure 8.6, which compares the distribution functions of the estimates produced by the two methods. As anticipated, we observe a significant reduction of the bias for the Stieltjes transform method.
Random matrix theory for signal processing is a fast-growing field of research whose interest is mainly motivated by the increasing dimensionality and complexity of today's systems. While the first years of random matrix theory focused mainly on Gaussian and invariant matrix distributions, the last ten years of research have mainly targeted large dimensional matrices with independent entries. This provided interesting results, in particular on the limiting spectrum of sample covariance matrices, which led to new results on inverse problems for large dimensional systems. These results are often surprisingly simple and efficient, as they compare well against exact maximum likelihood solutions, even for systems of moderate dimensions. Much more is however needed from a mathematical viewpoint, in particular regarding second order statistics, see e.g., [35, 36], in order to evaluate theoretically the performance of these methods, as well as generalizations to more intricate random matrix structures, such as Vandermonde matrices for array processing, see e.g., [37], or unitary random matrices, see e.g., [38]. A more exhaustive account of random matrix methods, as well as more details on the methods presented here, can be found in [10, 17, 28].
Exercise 8.7.1 (Sampling and Signal Energy). Based on Theorem 8.2.9, prove the Marčenko–Pastur law, Theorem 8.2.5.
Hint 8.7.1. Observe that the fixed-point equation in m_{F_B} now reduces to a second-order polynomial, from which m_{F_B}(z) takes an explicit form. The inverse Stieltjes transform formula (8.4) gives the expression of F_B.
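As a reminder of what the exercise should produce, the Marčenko–Pastur density can be compared against a sampled spectrum. The normalization below (B = \frac{1}{n}XX^\dagger, X of size N×n with i.i.d. zero mean, unit variance entries, N/n → c < 1) is an assumption to be matched against the exact statement of Theorem 8.2.5.

```python
import numpy as np

def mp_density(x, c):
    """Marchenko-Pastur density for B = (1/n) X X^dagger, N/n -> c < 1."""
    a, b = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
    out = np.zeros_like(x)
    inside = (x > a) & (x < b)
    out[inside] = np.sqrt((b - x[inside]) * (x[inside] - a)) \
        / (2 * np.pi * c * x[inside])
    return out

rng = np.random.default_rng(3)
N, n = 500, 1500                               # c = N/n = 1/3
X = rng.standard_normal((N, n))
lam = np.linalg.eigvalsh(X @ X.T / n)          # sampled spectrum

xs = np.linspace(0.01, 4.0, 2000)
dens = mp_density(xs, N / n)
mass = dens.sum() * (xs[1] - xs[0])            # integrates to 1 when c < 1
print(mass, lam.mean())
```

A histogram of lam superimposed on dens visually confirms the law; the total mass of the density is 1 for c < 1, and the mean eigenvalue is 1.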
Exercise 8.7.2. Let X_N ∈ ℂ^{N×n} be a random matrix with i.i.d. Gaussian entries of zero mean and variance 1/n. For R_N ∈ ℂ^{N×N} and T_N ∈ ℂ^{n×n} deterministic and of uniformly bounded spectral norm such that F^{R_N} ⇒ F^R and F^{T_N} ⇒ F^T, as N, n → ∞, determine an expression for the Stieltjes transform of the limiting eigenvalue distribution of B_N = R_N^{\frac12} X_N T_N X_N^\dagger R_N^{\frac12} as N/n → c.
Hint 8.7.2. Follow the proof of Theorem 8.2.9 by looking for a deterministic equivalent of \frac{1}{N}\mathrm{tr}\left(A(B_N - zI_N)^{-1}\right) for some deterministic A, taken to be successively R_N and I_N. A good choice of the matrix D_N is D_N = a_N R_N.
Exercise 8.7.3. Based on the definition of the Shannon-transform and on the G-estimator for the Stieltjes transform, determine a G-estimator for
V_{T_N}(x) = \frac{1}{N}\log\det\left(x T_N + I_N\right)
based on the observations
y_k = T_N^{\frac12} x_k
with xk ∈ ℂN with i.i.d. entries of zero mean and variance 1/N, independent across k, for k ∈ {1,…, n}.
Hint 8.7.3. Write the expression of V_{T_N}(x) as a function of the Stieltjes transform of T_N and perform a change of variable in the resulting integral using Theorem 8.2.9.
Exercise 8.7.4. From the result of Theorem 8.3.3, propose a hypothesis test for the presence of a signal transmitted by a signal source and observed by a large array of sensors, assuming that the additive noise variance is either perfectly known or not.
Hint 8.7.4. Observe that the ratio of the extreme eigenvalues under both hypotheses H0 and H1 is asymptotically independent of the noise variance.
Exercise 8.7.5. For W ∈ ℂ^{N×n}, n < N, the n columns of a random unitarily invariant unitary matrix, and w a column vector of W, prove that, if B_N is a random matrix with bounded spectral norm, function of all columns of W but w, then, as N, n → ∞ with n/N → c < 1,

w^\dagger B_N w - \frac{1}{N-n}\mathrm{tr}\left(\left(I_N - WW^\dagger\right)B_N\right) \xrightarrow{\text{a.s.}} 0.
Hint 8.7.5. Write w as the normalized projection of a Gaussian vector x onto the subspace orthogonal to the space spanned by the columns of W but w, i.e., w = \Pi x / \|\Pi x\|, with \Pi = I_N - WW^\dagger + ww^\dagger.
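The statement of Exercise 8.7.5 can also be probed numerically. Below, W is obtained as the Q factor of a Gaussian matrix (one standard way to generate Haar-distributed orthonormal columns) and B_N is a fixed deterministic matrix, hence trivially a function of the columns of W other than w.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 1000, 300

# Haar-distributed orthonormal columns via (reduced) QR of a complex Gaussian.
G = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
W, _ = np.linalg.qr(G)
w = W[:, -1]                                   # one column of W

# A fixed deterministic B_N with bounded spectral norm (independent of w).
B = np.diag(np.linspace(0.0, 1.0, N))

quad = np.real(w.conj() @ B @ w)
trace = np.real(np.trace((np.eye(N) - W @ W.conj().T) @ B)) / (N - n)
print(abs(quad - trace))                       # -> 0 as N grows
```

The gap between the quadratic form and the normalized trace shrinks as N grows, illustrating the almost sure convergence claimed in the exercise.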
[1] J. Wishart, “The generalized product moment distribution in samples from a normal multivariate population,” Biometrika, vol. 20, no. 1–2, pp. 32–52, Dec. 1928.
[2] R. Couillet and M. Debbah, “A Bayesian framework for collaborative multisource signal detection,” IEEE Transactions on Signal Processing, vol. 58, no. 10, pp. 5186–5195, Oct. 2010.
[3] P. Bianchi, J. Najim, M. Maida, and M. Debbah, “Performance of some eigen-based hypothesis tests for collaborative sensing,” Proceedings of the IEEE Workshop on Statistical Signal Processing (SSP'09), Sep. 2009.
[4] R. A. Fisher, “The sampling distribution of some statistics obtained from non-linear equations,” The Annals of Eugenics, vol. 9, pp. 238–249, 1939.
[5] M. A. Girshick, “On the sampling theory of roots of determinantal equations,” The Annals of Math. Statistics, vol. 10, pp. 203–204, 1939.
[6] P. L. Hsu, “On the distribution of roots of certain determinantal equations,” The Annals of Eugenics, vol. 9, pp. 250–258, 1939.
[7] S. Roy, “p-statistics or some generalizations in the analysis of variance appropriate to multi-variate problems,” Sankhya: The Indian Journal of Statistics, vol. 4, pp. 381–396, 1939.
[8] Harish–Chandra, “Differential operators on a semi-simple Lie algebra,” American Journal of Mathematics, vol. 79, pp. 87–120, 1957.
[9] C. Itzykson and J. B. Zuber, Quantum Field Theory. McGraw-Hill, 1980; Dover Publications, 2005, p. 705.
[10] Z. Bai and J. W. Silverstein, Spectral Analysis of Large Dimensional Random Matrices, Springer Series in Statistics, 2009.
[11] P. Billingsley, Convergence of Probability Measures. Hoboken, NJ: John Wiley & Sons, Inc., 1968.
[12] S. Wagner, R. Couillet, M. Debbah, and D. T. M. Slock, “Large System Analysis of Linear Precoding in MISO Broadcast Channels with Limited Feedback,” 2010. [Online]. Available: http://arxiv.org/abs/0906.3682
[13] V. A. Marčenko and L. A. Pastur, “Distributions of eigenvalues for some sets of random matrices,” Math. USSR-Sbornik, vol. 1, no. 4, pp. 457–483, Apr. 1967.
[14] J. W. Silverstein and Z. D. Bai, “On the empirical distribution of eigenvalues of a class of large dimensional random matrices,” Journal of Multivariate Analysis, vol. 54, no. 2, pp. 175–192, 1995.
[15] Z. D. Bai and J. W. Silverstein, “No Eigenvalues Outside the Support of the Limiting Spectral Distribution of Large Dimensional Sample Covariance Matrices,” Annals of Probability, vol. 26, no. 1, pp. 316–345, Jan. 1998.
[16] P. Billingsley, Probability and Measure, 3rd ed. Hoboken, NJ: John Wiley & Sons, Inc., 1995.
[17] A. M. Tulino and S. Verdú, “Random matrix theory and wireless communications,” Foundations and Trends in Communications and Information Theory, vol. 1, no. 1, 2004.
[18] D. N. C. Tse and O. Zeitouni, “Linear multiuser receivers in random environments,” IEEE Transactions on Information Theory, vol. 46, no. 1, pp. 171–188, 2000.
[19] J. Baik and J. W. Silverstein, “Eigenvalues of large sample covariance matrices of spiked population models,” Journal of Multivariate Analysis, vol. 97, no. 6, pp. 1382–1408, 2006.
[20] Z. D. Bai and J. W. Silverstein, “Exact Separation of Eigenvalues of Large Dimensional Sample Covariance Matrices,” The Annals of Probability, vol. 27, no. 3, pp. 1536–1555, 1999.
[21] J. W. Silverstein and S. Choi, “Analysis of the limiting spectral distribution of large dimensional random matrices,” Journal of Multivariate Analysis, vol. 54, no. 2, pp. 295–309, 1995.
[22] X. Mestre, “On the asymptotic behavior of the sample estimates of eigenvalues and eigenvectors of covariance matrices,” IEEE Transactions on Signal Processing, vol. 56, no. 11, pp. 5353–5368, Nov. 2008.
[23] R. Couillet, J. W. Silverstein, and M. Debbah, “Eigen-Inference for Energy Estimation of Multiple Sources,” 2011. [Online]. Available: http://arxiv.org/abs/1001.3934
[24] X. Mestre, “Improved estimation of eigenvalues of covariance matrices and their associated subspaces using their sample estimates,” IEEE Transactions on Information Theory, vol. 54, no. 11, pp. 5113–5129, Nov. 2008.
[25] F. Hiai and D. Petz, The semicircle law, free random variables and entropy - Mathematical Surveys and Monographs No. 77. Providence, RI, USA: American Mathematical Society, 2006.
[26] Ø. Ryan and M. Debbah, “Free deconvolution for signal processing applications,” in Proceedings of the IEEE International Symposium on Information Theory (ISIT'07), Nice, France, June 2007, pp. 1846–1850.
[27] N. R. Rao and A. Edelman, “The polynomial method for random matrices,” Foundations of Computational Mathematics, vol. 8, no. 6, pp. 649–702, Dec. 2008.
[28] R. Couillet and M. Debbah, Random Matrix Methods for Wireless Communication., 1st ed. New York, NY: Cambridge University Press, 2011.
[29] P. Vallet, P. Loubaton, and X. Mestre, “Improved subspace DoA estimation methods with large arrays: The deterministic signals case,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'09), 2009, pp. 2137–2140.
[30] W. Rudin, Real and Complex Analysis, 3rd ed. McGraw-Hill Series in Higher Mathematics, May 1986.
[31] D. Gregoratti and X. Mestre, “Random DS/CDMA for the amplify and forward relay channel,” IEEE Transactions on Wireless Communications, vol. 8, no. 2, pp. 1017–1027, 2009.
[32] R. Couillet and M. Debbah, “Free deconvolution for OFDM multicell SNR detection,” in Proceedings of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC'08), Cannes, France, 2008, pp. 1–5.
[33] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series and Products, 6th ed. Academic Press, 2000.
[34] S. H. Simon, A. L. Moustakas, and L. Marinelli, “Capacity and character expansions: Moment generating function and other exact results for MIMO correlated channels,” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5336–5351, 2006.
[35] Z. D. Bai and J. W. Silverstein, “CLT of linear spectral statistics of large dimensional sample covariance matrices,” Annals of Probability, vol. 32, no. 1A, pp. 553–605, 2004.
[36] J. Yao, R. Couillet, J. Najim, E. Moulines, and M. Debbah, “CLT for eigen-inference methods in cognitive radios,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'11), Prague, Czech Republic, 2011, pp. 2980–2983.
[37] Ø. Ryan and M. Debbah, “Asymptotic behavior of random Vandermonde matrices with entries on the unit circle,” IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3115–3148, July 2009.
[38] R. Couillet, J. Hoydis, and M. Debbah, “Deterministic equivalents for the analysis of unitary precoded systems,” IEEE Transactions on Information Theory, 2011.
1We remind that a unitary matrix U ∈ ℂ^{N×N} is such that UU^\dagger = U^\dagger U = I_N.
2Throughout this work, we will respect the convention that x (be it a scalar or a Hermitian matrix) is non-negative if x ≥ 0, while x is positive if x > 0.
3The Hermitian property is fundamental to ensure that all eigenvalues of XN belong to the real line. However, the extension of the empirical spectral distribution (e.s.d.) to non-Hermitian matrices is sometimes required; for a definition, see (1.2.2) of [10].
4We borrow here the notation m due to a large number of contributions from Bai, Silverstein et al. [14, 15]. In other works, the notation s or S for the Stieltjes transform is used.
5We recall that the support Supp (F) of a real function F is the set {x ∈ℝ, ∣F(x)∣ > 0}.