Dealing Practically with Multi-Item Scales

Multi-Item Scales: A Brief Reminder

As discussed in Chapter 4, when gathering data on a construct, researchers sometimes use multiple measurements (multi-item scales) instead of a single measure. This is desirable because it allows measurement of various aspects of a construct, and it also allows the researcher to test whether the construct is measured reliably across the various items. In the textbook case, we have used multiple survey measurements of both satisfaction and trust in order to arrive at final Satisfaction and Trust measures.
As introduced in Chapter 4, there are various practical steps to such analyses, as discussed next.

Step 1: Deal with Reversed Items

Let us say that the four customer satisfaction scale items are the following survey questions (note that I created these for the example; do not use them in any practical exercises):
  • Satisfaction01: We are satisfied with Accu-Phi's accounting software
  • Satisfaction02: Accu-Phi's software fulfills our accounting needs
  • Satisfaction03: I have been happy with the functionality of Accu-Phi software
  • Satisfaction04: We have not been satisfied with Accu-Phi’s software
Can you see that the first three address a state of positive satisfaction whereas the fourth assesses a lack of satisfaction? Satisfaction04 is a reverse question; it runs in the opposite direction to the others.
In order to treat these four items as one scale, the data for the reverse item must itself be reversed: a “7” on Satisfaction04 means the highest dissatisfaction, so such a data point must be reversed to a “1” so that it agrees with Satisfaction01-Satisfaction03.
You can do the reversing in the original column or create a new one. I suggest the latter, so that we don’t throw anything away. How do you reverse the data in an item like this?
  1. Identify the maximum possible answer on the original scale of the reverse item (for instance, our Satisfaction04 is on a 1-7 scale, so 7 is the maximum).
  2. If the scale starts at 0, the new variable will be the maximum minus the original score (e.g. if the scale was 0-100 then a score of 100 would become 100-100 = 0, a score of 10 would become 100-10 = 90, and so on).
  3. If the original scale starts at 1 (like Satisfaction04):
    1. Add 1 to the original maximum (e.g. for Satisfaction04 the maximum + 1 = 7 + 1 = 8).
    2. Each data point in the new variable should be (maximum + 1) – (original data point in the reverse item).
Say that for Satisfaction04 someone put a “6”. This indicates high dissatisfaction. In the new reversed variable, the data point would become (maximum + 1) – (original data point in the reverse item) = (7 + 1) – (6) = 8 – 6 = 2. Similarly, if the person put a “3” the new data point would be 8 – 3 = 5. This method therefore reverses the data.
Chapter 6 has already demonstrated how to create a reversed scale item in SAS. You will see there that for the chapter example I have created a new version of Satisfaction04, called “Rev_Satisfaction04.”
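For quick reference, the reversal amounts to a one-line data step. The following is a minimal sketch; the dataset name work.survey is an assumption for illustration:

   data work.survey;
      set work.survey;
      /* (maximum + 1) - original data point, for the 1-7 scale.    */
      /* For a scale starting at 0, it would be maximum - original. */
      Rev_Satisfaction04 = 8 - Satisfaction04;
   run;

Keeping the original Satisfaction04 column intact means nothing is thrown away.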

Step 2: Assess Internal Reliability of each Multi-Item Scale

Introduction

Internal reliability asks whether the answers to the various items in a multi-item scale tend to be consistent. A first example is whether people answer the survey items relating to the same issue, say satisfaction, in the same way. If we are not talking about people answering questions, we may have other sources of multi-item data relating to the same construct (such as multiple pieces of financial data that indicate the financial health of organizations) and wish to know whether they give consistent results.
For example, going back to the textbook case study, we would be interested to know whether each respondent answered all the trust questions in roughly the same way (consistently high, low, medium, etc.). If so, the items are consistent with each other, and we can be reasonably sure that they all relate to the same thing (i.e., the same variable). However, say that you think you have four survey items relating to trust, but most people answer the first two items high and the last two items low. Respondents are then not reacting to these items consistently, and accordingly it seems that the four survey items do not relate to the same underlying trust construct. Perhaps, in this case, you have two sub-dimensions of trust instead.
How do you measure internal reliability? There are several measures, but probably the most common is the Cronbach alpha.

Internal Reliability: Measurement through the Cronbach Alpha

You can assess the internal consistency of a single set of multi-item scale answers using a statistic called the Cronbach alpha.
The Cronbach alpha is a form of pooled correlation coefficient between the items in a multi-item scale. In other words, the Cronbach alpha is like a correlation coefficient but for more than two items.
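For reference, the standard formula for the statistic (a general fact about the Cronbach alpha rather than anything specific to this book's example) is

   \alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}} \right)

where k is the number of items, \sigma^{2}_{Y_i} is the variance of item i, and \sigma^{2}_{X} is the variance of the total score formed by summing the items. Intuitively, the more the items covary, the larger the total-score variance is relative to the sum of the individual item variances, and the closer alpha gets to 1.
Before learning to generate the Cronbach alpha statistic in SAS, it is a good idea to learn the general guidelines for it, although note that these are somewhat debatable: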
  1. If the items in a scale are related closely enough to have high internal reliability (and therefore to give you confidence to aggregate them as indicators of the variable), then the Cronbach alpha should be at least .60, preferably >.65, and ideally >.75.
  2. If the internal reliability is poor (<.60), the cause may be one or a few deviant items that do not agree with the others. Removing these, so long as you have sufficient variables left over to still form a multi-item scale, is usually a suitable solution.
  3. A negative Cronbach alpha probably means that you have one or more reversed question items in the mix, and you forgot to reverse the data of these.
To generate the Cronbach alpha in SAS you can use PROC CORR again, this time with the extra keyword ALPHA. To run this, open and run the file “Code09b Reliability.” This file has two reliability analyses, one for each group of variables (the first for all the trust items, the second for satisfaction).
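The call itself is short. As a minimal sketch (the dataset name work.survey is an assumption here; the file uses the chapter's actual dataset), the satisfaction analysis looks something like this:

   proc corr data=work.survey alpha nomiss;
      /* Note: the reversed version of the reverse item, not the original. */
      var Satisfaction01 Satisfaction02 Satisfaction03 Rev_Satisfaction04;
   run;

The ALPHA keyword adds the Cronbach alpha tables to the usual PROC CORR output, and NOMISS excludes any observation with a missing value on one of the listed items.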
There are several considerations when running an internal reliability analysis, as explained next.

Specifying Variables

Your major task with internal reliability is to specify the variables to analyze.
Say that you wish to assess internal reliability for the four satisfaction items, as done in the second set of syntax. You will write in the variables to correlate. First, be careful to ascertain whether you had any reverse items. If you did, enter the newly created reversed versions of those items, not the original reverse items. For instance:
  • In the textbook case, Satisfaction04 is a reverse item. I therefore needed to create a reversed version of it, which I call “Rev_Satisfaction04”. Chapter 6 illustrates how to do this in SAS.
  • If you look at the file “Code09b Reliability” you will see that when I enter the variables to correlate, I exclude the original reverse Satisfaction04 item and include the reversed Rev_Satisfaction04.
Many beginner statisticians make the mistake of running Cronbach alpha statistics on multiple, unrelated columns of data, for instance, putting all the trust, satisfaction, enquiries and sales scores together. This would be wrong. The point of reliability is to see whether a related set of variables is so highly intercorrelated as to suggest that they can be aggregated together. Only analyze together items belonging to the same construct, as done here.

Assessing the Reliability Output

The important parts of the output can be seen in Figure 9.3 (SAS produces other tables too, but we will focus on these).
Figure 9.3 Salient pieces of SAS Cronbach alpha output
You would typically assess the output by starting with the Cronbach alpha statistics and the correlations. When assessing the internal reliability of a multi-item scale, you preferably want an overall Cronbach alpha >.65.
  1. If the Cronbach alpha is good (>.80): You can probably approve the reliability of the scale and proceed to aggregate the items into one score per observation, as discussed in the next section. In the first table of Figure 9.3 the overall satisfaction scale has an alpha of .72, which is acceptable, although we would want to see whether we can improve it.
  2. If the Cronbach alpha is bad (<.65): You need to assess whether this is caused by one or a few deviant items or whether the items fail to fit together across the board:
    1. First, reassess whether you forgot about a reverse item in the scale. Including the original data for a reverse item, instead of the newly created reversed version, will throw out the reliability, because the item runs in the opposite logical direction to the others (it will be negatively correlated with them). If this has occurred, create and include the proper reversed versions.
    2. Assess the “Cronbach Coefficient Alpha with Deleted Variable” table, shown in the second table of Figure 9.3. The “Alpha” columns show what the reliability would be if the item in that row were left out. A much higher score in this column indicates that the item is causing poor reliability. For instance, in Figure 9.3 we see that leaving out Rev_Satisfaction04 would lead to a big gain in alpha, to .93; this item seems not to be working well with the rest. You might consider omitting the offending sub-variable when creating the aggregated variable score. You may need to do this for a few variables.
    3. Assess the individual correlations between each pair of variables, as shown in Chapter 8. If one item in particular is very poorly or negatively correlated with the others, it is likely a problem and you may wish to try leaving it out. Again, in Figure 9.3 we see that Rev_Satisfaction04 is not just weakly but slightly negatively correlated with the rest. This is not a good sign; perhaps we should remove it.
    4. If none or few of the variables fit together reliably, you must either choose to use only the one or two that have good “face validity” (seem to represent the spirit of the variable best) and throw away the others, or omit the whole multi-item variable altogether.
  3. If the Cronbach alpha is moderate (.65 < alpha < .80): Usually this is OK, but to strengthen it, assess as in point 2 above whether perhaps just one item is not fitting well, and if so, consider removing it.
As a challenge, analyze the Cronbach alpha for the trust items yourself; that analysis will already have run with the code. What alpha do you get? What conclusions do you come to, and would you change anything?
Once you have ascertained the reliability of a given multi-item scale, you can proceed to aggregate it, as discussed next.

Step 3: Aggregate Multiple Items into a Summary Variable

If we are reasonably confident that the questions in a multi-item set are consistent, we generally combine all the answers in the set into one aggregate score for that set. There are two major options for aggregating.
First is simple summation or averaging into a single score. For example, if there are five questions relating to satisfaction, we might sum or average the answers to come to an aggregated satisfaction score. Chapter 6 demonstrated how to do this.
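As a minimal sketch of this first option (the dataset name work.survey is again an assumption, and Rev_Satisfaction04 is included on the assumption that it survived the Step 2 reliability check):

   data work.survey;
      set work.survey;
      /* Average the reliable items into one score per respondent. */
      /* Averaging keeps the score on the original 1-7 scale; omit */
      /* Rev_Satisfaction04 here if Step 2 led you to drop it.     */
      Satisfaction = mean(Satisfaction01, Satisfaction02, Satisfaction03,
                          Rev_Satisfaction04);
   run;

The MEAN function averages over whichever items are non-missing, which is one common way of handling respondents who skipped an item; a simple sum would give such respondents a misleadingly low total.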
Second is factor analysis and similar methods. Factor analysis can assess the items from multiple multi-item scales and explore whether they separate out into their different variables. For instance, say you have multiple survey items for each of four different variables (satisfaction, trust, service quality and support). We want two things for such scales:
  1. Convergent validity: This is similar to internal reliability, in that it assesses whether a group of items receives consistent and similar patterns of responses. Do the items seem to stick together and indicate a single underlying variable? (For instance, are the responses to the satisfaction items convergent, in that they are consistent with each other?)
  2. Discriminant validity: Although we’re happy that the satisfaction items load together, we don’t want any satisfaction items to stick with any items from other variables like trust. What we really want is for each set of separate variable items to stick together in their own cluster, separate from the clusters of other variables. This is called discriminant validity.
There are two different types of factor analysis that can help assess, for multiple items from multiple variables, whether the items separate out into discernible sets of variable items (e.g. all the satisfaction items together, all the trust items together, and little overlap):
  • Exploratory factor analysis (principal component analysis and common factor analysis) goes “looking for” underlying constructs. You present the statistics program with a list of variables and it sorts these into groups (factors) based on overlaps of correlation. This approach is most often used when you have no firm idea beforehand of which items are expected to group into which factors.
  • Confirmatory factor analysis tests whether pre-specified groups of variables belong together as factors, and it forms the factors for you. In other words, you pre-specify that one set of variables should belong to a satisfaction factor, that a second set should belong to a trust factor, and so forth. The program then tells you the extent to which your pre-specified pattern agrees with, or fits, the actual data responses, based on the variable correlations and overlaps.
Once you have a factor analysis solution that seems to work for your analysis (i.e. the satisfaction items form one discernible factor, the trust items another, and so on, perhaps with some acceptable overlap), SAS can form a single variable score for each observation on each factor based on this solution. In our example, each survey respondent can now be given a single satisfaction score, a single trust score, and so on. This is a more complex but more rigorous way of aggregating your multi-item scales into single variable scores.
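Purely for orientation (the details are beyond the scope of this book), an exploratory factor analysis in SAS might look something like the sketch below. The dataset name work.survey and the trust item names Trust01-Trust04 are assumptions for illustration:

   proc factor data=work.survey method=principal rotate=varimax
               nfactors=2 out=work.factor_scores;
      var Satisfaction01 Satisfaction02 Satisfaction03 Rev_Satisfaction04
          Trust01 Trust02 Trust03 Trust04;
   run;

Here NFACTORS=2 asks for two factors (a hoped-for satisfaction factor and a trust factor), ROTATE=VARIMAX rotates the solution so that the loadings are easier to interpret, and OUT= writes a dataset containing one score per respondent on each factor.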
This book does not discuss factor analysis further; the interested reader should follow up in more advanced and specialized texts.
Once you have completed the tasks of data checking, cleaning, and preparation into final variables, you can proceed to the actual data analysis, ranging from basic descriptive statistics and associations of final constructs (already discussed in Chapters 7 and 8) to more complex techniques such as regression, discussed later in the book.