1.3. Multivariate Normal Distribution

A probability distribution that plays a pivotal role in multivariate analysis is the multivariate normal distribution. We say that a p × 1 random vector y has a multivariate normal distribution with mean vector μ and variance-covariance matrix Σ if its density is given by

    f(y) = (2π)−p/2 |Σ|−1/2 exp{−(1/2)(y − μ)′Σ−1(y − μ)}.

In notation, we state this fact as y ~ Np(μ, Σ). Observe that the above density is a straightforward extension of the univariate normal density, to which it reduces when p = 1.
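As a minimal numerical sketch of the density above (assuming numpy and scipy are available; the particular μ, Σ, and y are illustrative choices, not from the text), the closed-form expression can be evaluated directly and compared against a library implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative mean vector and covariance matrix (p = 3); the numbers
# are arbitrary choices for this sketch.
mu = np.array([1.0, 0.0, -1.0])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
y = np.array([0.8, -0.2, -0.5])

# Density from the closed form:
# f(y) = (2*pi)^(-p/2) |Sigma|^(-1/2) exp(-q/2),
# where q = (y - mu)' Sigma^{-1} (y - mu).
p = len(mu)
d = y - mu
q = d @ np.linalg.solve(Sigma, d)
manual = (2 * np.pi) ** (-p / 2) * np.linalg.det(Sigma) ** (-0.5) * np.exp(-q / 2)

# The same density via scipy, for comparison.
library = multivariate_normal(mean=mu, cov=Sigma).pdf(y)
assert np.isclose(manual, library)
```

Using `np.linalg.solve` rather than explicitly inverting Σ is the standard numerically stable way to form the quadratic term.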

Important properties of the multivariate normal distribution include the following:

  • Let A be a fixed r × p matrix with r ≤ p; then Ay ~ Nr(Aμ, AΣA′). The vector Ay admits a density if and only if AΣA′ is nonsingular, which happens if and only if the rows of A are linearly independent. In principle, r can also be greater than p; in that case, however, the matrix AΣA′ is necessarily singular, and consequently the vector Ay does not admit a density function.

  • Let G be a p × p matrix such that Σ−1 = GG′; then G′y ~ Np(G′μ, I) and G′(y − μ) ~ Np(0, I).

  • Any fixed linear combination of y1, ..., yp, say c′y, where c is a fixed p × 1 vector, is also normally distributed. Specifically, c′y ~ N1(c′μ, c′Σc).

  • Suppose y is partitioned as y = (y1′, y2′)′, where y1 is p1 × 1, with μ and Σ partitioned conformably. Then the subvectors y1 and y2 are also normally distributed; specifically, y1 ~ Np1(μ1, Σ11) and y2 ~ Np−p1(μ2, Σ22).

  • Individual components y1, ..., yp are all normally distributed. That is, yi ~ N1 (μi, σii), i = 1, ..., p.

  • The conditional distribution of y1 given y2, written as y1|y2, is also normal. Specifically,

    y1|y2 ~ Np1(μ1 + Σ12Σ22−1(y2 − μ2), Σ11 − Σ12Σ22−1Σ21).

    Let μ1 + Σ12Σ22−1(y2 − μ2) = μ1 − Σ12Σ22−1μ2 + Σ12Σ22−1y2 = B0 + B1y2, and Σ11·2 = Σ11 − Σ12Σ22−1Σ21. The conditional expectation of y1 for given values of y2, that is, the regression function of y1 on y2, is B0 + B1y2, which is linear in y2. This is a key fact for multivariate multiple linear regression modeling. The matrix Σ11·2 usually serves as the variance-covariance matrix of the error components in these models. An analogous result (and interpretation) can be stated for the conditional distribution of y2 given y1.

  • Let δ be a fixed p × 1 vector, then

    y + δ ~ Np(μ + δ, Σ).

  • The random components y1, ..., yp are all independent if and only if Σ is a diagonal matrix; that is, when all the covariances (or correlations) are zero.

  • Let u1 and u2 be jointly normally distributed, with u1 ~ Np(μu1, Σu1) and u2 ~ Np(μu2, Σu2); then

    u1 ± u2 ~ Np(μu1 ± μu2, Σu1 + Σu2 ± (cov(u1, u2) + cov(u2, u1))).

    Note that if u1 and u2 were independent, the last two covariance terms would drop out.
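The conditional-distribution property above can be checked numerically. The sketch below (assuming numpy; the partition p1 = 1 and the entries of μ and Σ are illustrative, not from the text) computes B0, B1, and Σ11·2 from a partitioned covariance matrix and verifies that Σ11·2 equals the covariance of the residual y1 − B1y2, which is why it serves as the error covariance in the regression interpretation:

```python
import numpy as np

# Illustrative partitioned setup: p = 3 with p1 = 1, so y1 is scalar
# and y2 is 2 x 1; the numbers are arbitrary choices for this sketch.
mu = np.array([1.0, 0.0, 2.0])
Sigma = np.array([[2.0, 0.6, 0.4],
                  [0.6, 1.0, 0.3],
                  [0.4, 0.3, 1.5]])
p1 = 1
mu1, mu2 = mu[:p1], mu[p1:]
S11, S12 = Sigma[:p1, :p1], Sigma[:p1, p1:]
S21, S22 = Sigma[p1:, :p1], Sigma[p1:, p1:]

# Regression coefficients B1 = S12 S22^{-1}, B0 = mu1 - B1 mu2,
# and the conditional covariance Sigma_{11.2} = S11 - S12 S22^{-1} S21.
B1 = S12 @ np.linalg.inv(S22)
B0 = mu1 - B1 @ mu2
S11_2 = S11 - S12 @ np.linalg.inv(S22) @ S21

# Check: Sigma_{11.2} equals cov(y1 - B1 y2)
#   = S11 - B1 S21 - S12 B1' + B1 S22 B1'.
resid_cov = S11 - B1 @ S21 - S12 @ B1.T + B1 @ S22 @ B1.T
assert np.allclose(S11_2, resid_cov)

# The conditional mean at a given y2 is linear in y2: B0 + B1 y2.
y2 = np.array([0.5, 1.0])
cond_mean = B0 + B1 @ y2
```

Because Σ here is positive definite, the Schur complement Σ11·2 is also positive definite, as a conditional variance must be.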

There is a vast literature on the multivariate normal distribution, its properties, and the evaluation of multivariate normal probabilities. See Kshirsagar (1972), Rao (1973), and Tong (1990), among many others, for further details.
