Appendix 3.A Influence of Measurement Errors

Measurement errors may have nonrandom (systematic) components, random components, or both. When the nonrandom component of the error in a variable is the same for all respondents, it affects the central tendency or mean of the response distribution, but not the relation of the variable with other variables. Nonrandom errors that vary across individuals, however, are difficult to deal with. Random errors, on the other hand, increase unexplainable variation and can obscure potential relations among variables (Alwin, 1989; Alwin and Krosnick, 1991). SEM typically assumes random measurement errors. Here we briefly review the effect of random measurement error on regression analysis. Appendix 2.B shows that reliability is defined as the extent to which the variance of an observed variable is explained by the true scores that the variable is supposed to measure:

$$\rho_x = 1 - \frac{\mathrm{Var}(\delta)}{\mathrm{Var}(x)} \tag{3.15}$$

where $\mathrm{Var}(\delta)$ and $\mathrm{Var}(x)$ are the variances of the random measurement error $\delta$ and the observed variable $x$, respectively. Reliability less than 1.0 indicates the existence of measurement error. However, imperfect reliability or measurement error has different effects in linear regression analysis depending on whether it occurs in the dependent or the independent variables (Werts et al., 1976). Measurement error in a dependent variable does not bias the unstandardized regression coefficients, because the error is absorbed into the disturbance term; it does bias the standardized regression coefficients, however, because the standardized weights are a function of the standard deviations of both the dependent and independent variables. Measurement errors in independent variables are more problematic. In a regression model, measurement error in an independent variable biases the least-squares estimate of the slope coefficient downward, and the magnitude of the bias depends on the reliability of the variable, with lower reliability causing greater bias in the regression coefficient. Let us use the simple regression $y = a + bx + e$ as an example, assuming $y = \eta + \varepsilon$ and $x = \xi + \delta$, where $\eta$ and $\xi$ are the true scores of $y$ and $x$, respectively, the true scores are related by $\eta = \alpha + \beta\xi + \zeta$, and the measurement errors $\varepsilon$ and $\delta$ are independent of each other and of the true scores. The covariance between $x$ and $y$ is:

$$\mathrm{Cov}(x, y) = \mathrm{Cov}(\xi + \delta,\ \eta + \varepsilon) = \mathrm{Cov}(\xi, \eta) = \beta\,\mathrm{Var}(\xi) \tag{3.16}$$

and the regression slope coefficient b is equal to

$$b = \frac{\mathrm{Cov}(x, y)}{\mathrm{Var}(x)} = \beta\,\frac{\mathrm{Var}(\xi)}{\mathrm{Var}(x)} = \beta\rho_x \tag{3.17}$$

where $\beta$ is the slope coefficient from regressing the 'true' dependent variable $\eta$ on the 'true' independent variable $\xi$, and $\rho_x = \mathrm{Var}(\xi)/\mathrm{Var}(x)$ is the attenuation factor, which equals the reliability of $x$ since $\mathrm{Var}(x) = \mathrm{Var}(\xi) + \mathrm{Var}(\delta)$. When the measurement of $x$ is perfect (i.e., $\rho_x = 1$), $b = \beta$; otherwise, $b$ is attenuated toward zero.
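The attenuation in Equation 3.17 can be checked numerically with a short simulation sketch (the parameter values and variable names below are illustrative assumptions, not taken from the text):

```python
import numpy as np

# Illustrative simulation of the attenuation b = beta * rho_x (Equation 3.17).
rng = np.random.default_rng(42)
n = 200_000
beta = 0.8                       # assumed slope relating the true scores
var_xi, var_delta = 1.0, 0.5     # assumed true-score and error variance of x

xi = rng.normal(0.0, np.sqrt(var_xi), n)         # true score of x
eta = beta * xi + rng.normal(0.0, 0.3, n)        # true score of y
x = xi + rng.normal(0.0, np.sqrt(var_delta), n)  # observed x (reliability 2/3)
y = eta + rng.normal(0.0, 0.4, n)                # observed y

C = np.cov(x, y)
b = C[0, 1] / C[0, 0]            # least-squares slope of y on x
rho_x = var_xi / (var_xi + var_delta)
print(round(b, 2), round(beta * rho_x, 2))       # both approximately 0.53
```

With reliability $\rho_x = 1.0/1.5 \approx 0.67$, the observed slope settles near $0.8 \times 0.67 \approx 0.53$ rather than the true 0.8; note that the error added to $y$ leaves the slope unbiased, as the text states.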

If two or more independent variables in a multiple linear regression have measurement errors, the effects of the measurement errors on estimation of the regression coefficients are complicated. A coefficient may be biased either downward or upward, and the signs of the coefficients may even be reversed (Cochran, 1968; Bohrnstedt and Carter, 1971; Kenny, 1979; Armstrong, 1990). Allison and Hauser (1991, p. 466) argued that the bias depends ‘in complex ways on the true coefficients, the degrees of measurement error in each variable, and the pattern of inter-correlations among the independent variables.’
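The possibility of upward bias and even sign reversal can be illustrated with a hypothetical two-predictor simulation (all parameter values below are assumptions chosen for illustration): when the true scores are correlated and only $x_1$ is measured with error, $b_1$ is attenuated and $b_2$ can flip sign.

```python
import numpy as np

# Illustrative two-predictor case: error in x1 alone distorts both coefficients.
rng = np.random.default_rng(0)
n = 200_000
beta1, beta2 = 1.0, -0.2                 # assumed true coefficients

cov = np.array([[1.0, 0.7],              # correlated true scores, corr = 0.7
                [0.7, 1.0]])
xi = rng.multivariate_normal([0.0, 0.0], cov, n)
y = beta1 * xi[:, 0] + beta2 * xi[:, 1] + rng.normal(0.0, 0.5, n)

x1 = xi[:, 0] + rng.normal(0.0, 1.0, n)  # x1 measured with reliability 0.5
x2 = xi[:, 1]                            # x2 measured without error

A = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(A, y, rcond=None)[0][1:]
print(b.round(2))  # roughly [0.34, 0.26]: b1 attenuated, sign of b2 reversed
```

Here the estimate of $b_2$ is positive even though its true coefficient is $-0.2$: because $x_2$ is correlated with the poorly measured $x_1$, it picks up part of $x_1$'s effect, illustrating the complex pattern described by Allison and Hauser (1991).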

Notes

1. LISREL notation is used in the equations to specify the slope coefficients (γ's) of regressing the latent variables on the exogenous covariates. However, those regression coefficients are all specified in the BETA matrix in the Mplus TECH1 output.

2. For the purpose of model identification, the first indicator y1 of SOM, y5 of DEP, and y3 of ANX are not regressed on the covariates (Muthén, 1989; Kaplan, 2000).

3. Inclusion of a reciprocal effect makes the model nonrecursive and thus more difficult to estimate. Readers interested in nonrecursive SEM are referred to Berry (1984) and Bollen (1989a).

4. Item reliabilities can be estimated using test–retest information or estimated from multiple wave panel data (Heise, 1969; Heise and Bohrnstedt, 1970; Wiley and Wiley, 1970; Werts and Jöreskog, 1971; Palmquist and Green, 1992; Wang et al., 1995).

5. This approach is also applicable to composite measures (e.g., y1 is a sum of values of a set of observed indicators). The estimate of the composite measure's Cronbach alpha can be used as the reliability of the measure (Cohen et al., 1990).

6. Alternatively, interactions in SEM can be tested using multi-group modeling, in which the same model is specified and estimated simultaneously in each of the groups (e.g., treatment vs. control groups). This approach allows the capture of all the interactions between groups and independent variables, including latent variables. This topic will be discussed in Chapter 5.
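As a companion to note 5, a minimal sketch of computing Cronbach's alpha from a matrix of item scores (the function and the simulated data are illustrative, not from the text):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the composite score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative check: four parallel items, each with item reliability 0.5.
rng = np.random.default_rng(1)
tau = rng.normal(0.0, 1.0, (100_000, 1))          # common true score
items = tau + rng.normal(0.0, 1.0, (100_000, 4))  # unit-variance item errors
print(round(cronbach_alpha(items), 2))            # close to 0.80
```

For parallel items the result matches the Spearman–Brown value $kr/(1 + (k-1)r) = 4(0.5)/2.5 = 0.8$, which could then serve as the composite's reliability in the sense of note 5.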
