Appendix 22.1: Covariance Matrix Estimation

Consider the simplest case with two return series $r_{1,t}$ for $t = 1, \ldots, T$ and $r_{2,t}$ for $t = T-S+1, \ldots, T$, where $S < T$. Truncation to $S$ would imply that the means equal their maximum likelihood estimates over the common sample:

(22.1)  \hat{\mu}_{i,S} = \frac{1}{S} \sum_{t=T-S+1}^{T} r_{i,t}, \qquad i = 1, 2

Here, $\hat{\mu}_{1,S}$ and $\hat{\mu}_{2,S}$ are the sample MLEs for the truncated sample of size $S$. Similarly, the MLE covariance matrix based upon $S$ is equal to:

(22.2)  \hat{V}_S = \begin{pmatrix} \hat{\sigma}_{11,S} & \hat{\sigma}_{12,S} \\ \hat{\sigma}_{12,S} & \hat{\sigma}_{22,S} \end{pmatrix}, \qquad \hat{\sigma}_{ij,S} = \frac{1}{S} \sum_{t=T-S+1}^{T} \left(r_{i,t} - \hat{\mu}_{i,S}\right)\left(r_{j,t} - \hat{\mu}_{j,S}\right)

where the $\hat{\sigma}_{ij,S}$ are scalars. The parameters $\mu_1$ and $\sigma_{11}$ are typically replaced with their MLE counterparts based on the full sample of size $T$.
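The truncated estimator in (22.1) and (22.2) can be sketched in a few lines of NumPy; the simulated series, sample sizes, and seed below are illustrative assumptions, not data from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
T, S = 120, 60                       # illustrative long and short sample sizes
r1 = rng.normal(0.010, 0.05, T)      # longer return series, t = 1, ..., T
r2 = rng.normal(0.008, 0.04, S)      # shorter series, observed only for the last S periods

# Truncated MLE: use only the S observations common to both series.
r1_S = r1[-S:]
mu_S = np.array([r1_S.mean(), r2.mean()])      # equation (22.1)
X = np.column_stack([r1_S, r2])
V_S = np.cov(X, rowvar=False, bias=True)       # equation (22.2); bias=True divides by S (MLE)
```

Note that this discards the first $T-S$ observations of $r_1$ entirely, which is exactly the information the Stambaugh adjustment below recovers.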

Consider now the likelihood function for these series, which can be written as a combination of the information unique to $r_{1,t}$ (that is, the observations of $r_{1,t}$ not common to $r_{2,t}$) and the information common to both series; that is, the joint likelihood given by:

(22.3)  L(E, V) = \prod_{t=1}^{T-S} f(r_{1,t}) \times \prod_{t=T-S+1}^{T} f(r_{1,t}, r_{2,t})

The second part of the likelihood is the distribution of the joint information, while the first part describes the contribution to the likelihood function from the information unique to $r_{1,t}$. The truncated estimator ignores the first half of the likelihood function. Ordinarily, one would maximize this likelihood with respect to $E$ and $V$, but unfortunately these two sets of parameters do not appear in both halves of the likelihood. Stambaugh uses a result from Anderson (1957) that rewrites the joint likelihood as the product of a marginal density $f(r_{1,t})$ and a conditional density $f(r_{2,t} \mid r_{1,t})$. Assuming returns to be multivariate normal, we get the familiar result that the conditional mean of $r_{2,t}$ (conditional on $r_{1,t}$) is equal to:

(22.4)  E\left[r_{2,t} \mid r_{1,t}\right] = \mu_2 + \beta \left(r_{1,t} - \mu_1\right)

where $\beta = \sigma_{12}/\sigma_{11}$ is the regression coefficient of $r_{2,t}$ on $r_{1,t}$. If $r_1$ and $r_2$ are positively correlated ($\beta > 0$) and a return $r_{1,t}$ is observed to be above its mean $\mu_1$, then the conditional mean of $r_{2,t}$ is adjusted upward from its unconditional value $\mu_2$, though by less than the full deviation when $\beta < 1$. (This is what statisticians mean by "regression toward the mean.") Furthermore, the conditional covariance given by the multivariate normal is:

(22.5)  \operatorname{Var}\left(r_{2,t} \mid r_{1,t}\right) = \sigma_{22} - \frac{\sigma_{12}^2}{\sigma_{11}}
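A quick numerical check of (22.4) and (22.5); the mean vector, covariance matrix, and observed return below are hypothetical values chosen for illustration:

```python
import numpy as np

mu = np.array([0.010, 0.008])           # hypothetical unconditional means (mu1, mu2)
V = np.array([[0.0025, 0.0012],
              [0.0012, 0.0016]])        # hypothetical unconditional covariance matrix

beta = V[1, 0] / V[0, 0]                # regression coefficient of r2 on r1
r1_obs = 0.06                           # an observed r1 above its mean mu1

cond_mean = mu[1] + beta * (r1_obs - mu[0])        # equation (22.4)
cond_var = V[1, 1] - V[1, 0] ** 2 / V[0, 0]        # equation (22.5)
```

Because $\beta > 0$ and the observed $r_1$ exceeds $\mu_1$, the conditional mean exceeds $\mu_2$; the conditional variance is strictly smaller than the unconditional $\sigma_{22}$.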

In the more general multivariate case, $r_{1,t}$ is a vector of returns on $N_1$ assets, all with $T$ observations, and $r_{2,t}$ is an $N_2 \times 1$ vector of returns on the shorter time series. In that case, $\mu_1$ and $\mu_2$ are mean return vectors of size $N_1$ and $N_2$, respectively, so that $E$ is an $(N_1 + N_2) \times 1$ vector, and $V$ is partitioned so that $V_{11}$ is $N_1 \times N_1$, $V_{12}$ is $N_1 \times N_2$, and $V_{22}$ is $N_2 \times N_2$, with $V_{21} = V_{12}'$. Furthermore, $B = V_{21} V_{11}^{-1}$ is now an $N_2 \times N_1$ matrix of regression coefficients.

The objective is to derive estimates for these covariance matrices. To obtain the maximum likelihood estimators for the moments of the conditional density, regress $r_{2,t}$ on $r_{1,t}$ using the $S$ common observations, saving the covariance matrix of the residuals as $\hat{\Sigma}$. Likewise, estimate mean returns for $r_{1,t}$ and $r_{2,t}$ using $T$ and $S$ observations, respectively. Then, applying the results in (22.4) and (22.5), adjust the truncated mean for $r_{2,t}$ given in (22.1) by conditioning on the information in $r_{1,t}$:

(22.6)  \hat{\mu}_2 = \hat{\mu}_{2,S} + \hat{B}\left(\hat{\mu}_{1,T} - \hat{\mu}_{1,S}\right)

Therefore, if the two return series are positively correlated and the full-sample mean of the longer series exceeds its truncated mean, the mean for the shorter series is adjusted upward; that is, the truncated mean was most likely biased downward. Likewise, the truncated covariance matrices are adjusted according to (see Stambaugh):

\hat{\Sigma} = \hat{V}_{22,S} - \hat{B}\,\hat{V}_{11,S}\,\hat{B}'

thus

(22.7)  \begin{aligned}
\hat{V}_{11} &= \hat{V}_{11,T} \\
\hat{V}_{21} &= \hat{B}\,\hat{V}_{11,T} \\
\hat{V}_{22} &= \hat{\Sigma} + \hat{B}\,\hat{V}_{11,T}\,\hat{B}' = \hat{V}_{22,S} + \hat{B}\left(\hat{V}_{11,T} - \hat{V}_{11,S}\right)\hat{B}'
\end{aligned}

where $\hat{B} = \hat{V}_{21,S}\,\hat{V}_{11,S}^{-1}$.

It is easy to show that $\hat{\Sigma}$ is identical to the conditional covariance in equation (22.5). In equation (22.7), it is also true that the covariance between $r_{1,t}$ and $r_{2,t}$ is a linear rescaling of the covariation in $r_{1,t}$, with the magnitude depending on the strength of their covariation. Moreover, revisions to $\hat{V}_{22}$ depend on how much the covariation in the longer series changes over the time interval $T - S$.
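The full adjustment in (22.6) and (22.7) can be sketched for the bivariate case, where $\hat{B}$, $\hat{V}_{11}$, and $\hat{\Sigma}$ are scalars; the simulated data, parameter values, and seed are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
T, S = 240, 60
# Simulate correlated returns; r2 is observed only for the last S periods.
true_mu = [0.010, 0.008]
true_V = [[0.0025, 0.0012], [0.0012, 0.0016]]
full = rng.multivariate_normal(true_mu, true_V, size=T)
r1, r2 = full[:, 0], full[-S:, 1]

# Full-sample and common-sample MLE moments (np.var divides by n by default).
mu1_T, V11_T = r1.mean(), r1.var()
r1_S = r1[-S:]
mu1_S, mu2_S = r1_S.mean(), r2.mean()
V11_S, V22_S = r1_S.var(), r2.var()
V12_S = ((r1_S - mu1_S) * (r2 - mu2_S)).mean()

B = V12_S / V11_S                       # regression coefficient over the common sample
Sigma = V22_S - B * V11_S * B           # residual variance, identical to (22.5)

# Stambaugh-adjusted moments, equations (22.6)-(22.7):
mu2_hat = mu2_S + B * (mu1_T - mu1_S)
V12_hat = B * V11_T
V22_hat = Sigma + B * V11_T * B         # = V22_S + B * (V11_T - V11_S) * B
```

The last comment states the identity between the two forms of $\hat{V}_{22}$; both use all $T$ observations of the longer series rather than only the common sample.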

Removing the Effects of Smoothing

Consider, for example, the appraised value $P_t$, which is a moving average of current and past comparable-sale prices ("comps") $P^*_{t-k}$:

(22.8)  P_t = \sum_{k=0}^{\infty} w_k P^*_{t-k}

Rewrite the weights to be geometrically declining, such that $w_k = w_0 a^k$ for some scalar $a$, and let $a = 1 - w_0$. Note as well that $\sum_{k=0}^{\infty} w_k = 1$. Substituting into (22.8) yields:

P_t = w_0 P^*_t + a P_{t-1}

Now, replace P with its natural logarithm, lag one period, and subtract the resulting expression from (22.8), yielding a moving average of returns:

(22.9)  r_t = \sum_{k=0}^{\infty} w_k r^*_{t-k}

where $r_t = \ln P_t - \ln P_{t-1}$ and $r^*_t = \ln P^*_t - \ln P^*_{t-1}$. Obviously, returns are a weighted average of past market rates of return, and this is the source of the smoothing. Solving (22.9) for $r^*_t$ gives us:

(22.10)  r_t = (1 - a)\, r^*_t + a\, r_{t-1}

We seek the unsmoothed component $r^*_t$, which is:

(22.11)  r^*_t = \frac{r_t - a\, r_{t-1}}{1 - a}

Thus, we use the observed smoothed returns to recover the market return. An estimator for $a$ can be obtained from an OLS regression of observed $r_t$ on its lagged value. This is a special case of an ARMA model in which the moving average component, under certain stationarity conditions, is invertible: with geometrically declining weights, the infinite-order moving average in (22.9) is equivalent to the first-order autoregressive (AR) model in (22.10), whose parameter can be estimated by ordinary least squares. The series can then be unsmoothed and covariances estimated thereafter.
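The whole unsmoothing recipe, simulate a smoothed series, estimate $a$ by OLS of $r_t$ on $r_{t-1}$, then apply (22.11), can be sketched as follows; the smoothing parameter, sample size, and seed are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
r_star = rng.normal(0.01, 0.05, n)      # true (unsmoothed) market returns
a = 0.6                                 # assumed smoothing parameter, a = 1 - w0

# Build the smoothed series via the AR(1) form in (22.10).
r = np.empty(n)
r[0] = r_star[0]
for t in range(1, n):
    r[t] = (1 - a) * r_star[t] + a * r[t - 1]

# Estimate a by OLS regression of r_t on its lagged value.
x, y = r[:-1], r[1:]
a_hat = np.cov(x, y, bias=True)[0, 1] / x.var()

# Recover the unsmoothed series via (22.11).
r_unsmoothed = (y - a_hat * x) / (1 - a_hat)
```

The recovered series has roughly the volatility of the true market returns, while the smoothed series is markedly less volatile, which is why smoothing understates variances and covariances.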

Notes

1. Much of this chapter appeared in Peterson and Grier (2006), "Asset allocation in the presence of covariance misspecification," Financial Analysts Journal 62(4): 76–85.
