STEP 6:
LEARN HOW ITEMS AND FEEDBACK SCALES WERE DEVELOPED

Instruments that assess managerial competence or leadership effectiveness deal with complicated phenomena. These phenomena cannot be adequately represented by a single behavior or characteristic because they comprise many closely related behaviors and skills. To adequately measure these complex capacities, instruments must have scales that are made up of several items.

The process of instrument development typically begins with the writing of items that represent behaviors or characteristics believed to be related to effective management or leadership.

Items can come from a variety of places. Sometimes the author refers to a theory (leadership theory, theory of managerial work, competency models) to develop specific behavioral statements or statements describing characteristics or skills. At other times researchers create descriptions of characteristics or skills based on data they have collected. Another way items can be written is by basing them on the organizational experience of the author(s). People who frequently work in training or consulting with managers may feel they can capture in a set of statements the essence of the leadership or management effectiveness they have observed.

The better instruments tend to be those that have used a combination of approaches in their development. A basis in theory provides an instrument with a set of validation strategies, while empirical research can provide data from working managers. Ultimately, the quality of the final product depends on a combination of the quality of the theory, research, and experience of the developer; his or her skill in translating theory, research, and experience into written items; and the attention paid to instrument development and feedback design. A complete evaluation on your part will reveal the level of quality at all these stages.

The nature of items can vary, regardless of their origin. Items can be phrased behaviorally (for example, “Walks around to see how our work is going”), phrased as skills or competencies (for example, “Is good at influencing the right people”), or phrased as traits or personal characteristics (for example, “Is highly motivated”).

Instrument feedback is usually presented to the target manager as scores on scales (groups of items). Because scales tend to be more abstract than items (for example, “Resourcefulness”), it may be difficult for target managers to set goals for change based on this type of data. To help managers process their data, some instruments provide scores on the individual items that comprise these scales.

Feedback on behavioral items may be easiest for managers to use in setting goals for change because they are the most concrete. Behavioral changes are also the easiest for co-workers to see. Change on this type of item, however, can be the most superficial in terms of enhancing personal development. At the other extreme, feedback on characteristics such as motivation can be the most difficult to use, and change on this type of item can be the hardest to observe. But change on these items may be more likely to enhance personal development. Feedback on specific skills probably falls somewhere between these two extremes: it is moderately easy to use, the changes are observable, and it involves some real skill development.

The items discussed above are good examples. If one receives a low score on a behavioral item such as “Walks around to see how our work is going,” it will be relatively easy to change (that is, “Walk around more”) but will probably lead to little in the way of personal development for that manager. If one receives a low score on a skill-based item such as “Is good at influencing the right people,” it will be harder to change, because the manager will have to find out how to become better and then will need to improve. But the result can be important skill development. Finally, a low score on an item such as “Is highly motivated” can be the hardest of all to change. Change will require the manager to reflect and discover why motivation is low, and to decide what it will take to feel more motivated. Then the manager will have to make whatever personal or life changes are necessary. This kind of change, however, can be the most developmental.

If individuals are left on their own to process feedback (no trainer or facilitator is available), or if an instrument is not accompanied by comprehensive interpretive and development materials, the clarity of item content is critical. The harder items are to interpret, the more difficulty managers will have in benefiting from the feedback and the more important the quantity and quality of support become.

Once items are created, instrument development proceeds to the task of constructing the scales on which feedback will be given. Multiple items are grouped together to represent the set of closely related skills or behaviors that make up a managerial competency (for instance, “Resourcefulness” or “Planning and Organizing”). Responses to the items on a scale should group together to form a coherent whole, internally homogeneous and distinct from other scales.
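One widely used index of this kind of internal homogeneity (not named in this book, but standard in instrument development) is Cronbach’s alpha, which rises when the items on a scale move together across raters. The following sketch, using entirely hypothetical rater data, shows the idea:

```python
from statistics import variance

def cronbach_alpha(ratings):
    """Internal-consistency estimate for one feedback scale.

    ratings: one row per rater, each row holding that rater's
    scores on the items that make up the scale.
    """
    k = len(ratings[0])                                   # number of items
    item_vars = [variance([row[i] for row in ratings]) for i in range(k)]
    total_var = variance([sum(row) for row in ratings])   # variance of scale totals
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings: 5 raters x 4 items on one scale
ratings = [
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
]
print(round(cronbach_alpha(ratings), 2))  # a high value suggests a homogeneous scale
```

A scale whose items respond in concert yields a value near 1; a scale of unrelated items yields a value near 0.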

How the scale-development process is conducted is critical, because the resulting scales will form the basis of the model of leadership, management, or effective performance that you will be presenting to the manager. Your task as the evaluator is to discover whether the process of scale construction seems reasonable and complete. To determine that, you will need to look in the technical manual or in published technical reports.

There are typically two aspects of scale development: the statistical and the rational/intuitive. The statistical aspect involves using procedures such as factor analysis, cluster analysis, or item-scale correlations to group items into scales based on the degree of similarity in response patterns of the raters (for instance, people rated high on one item are also rated high on other items in that scale). The rational/intuitive aspect involves grouping items together based on the author’s expectations or experience about how different skills or behaviors relate to one another. Some instruments have used one of the two processes in developing scales from items and some have used both.
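To make the statistical aspect concrete, here is a deliberately simplified sketch. Real instrument developers use factor or cluster analysis; the greedy grouping below, based only on pairwise item correlations and an arbitrary 0.5 threshold (both choices are assumptions for illustration), captures the core idea of grouping items whose response patterns move together:

```python
from math import sqrt

def pearson(x, y):
    # Pearson correlation between two lists of ratings
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

def group_items(ratings, threshold=0.5):
    # Greedy grouping: an item joins the first group whose seed item it
    # correlates with above the threshold; otherwise it starts a new group.
    # A crude stand-in for factor or cluster analysis, for illustration only.
    k = len(ratings[0])
    cols = [[row[i] for row in ratings] for i in range(k)]
    groups = []
    for i in range(k):
        for g in groups:
            if pearson(cols[i], cols[g[0]]) > threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Hypothetical ratings: items 0-1 tap one skill, items 2-3 another
ratings = [
    [5, 4, 2, 2],
    [4, 5, 1, 2],
    [2, 2, 4, 5],
    [1, 2, 5, 4],
    [4, 4, 2, 1],
    [2, 1, 4, 4],
]
print(group_items(ratings))  # [[0, 1], [2, 3]]
```

Raters who score a manager high on item 0 also tend to score high on item 1, so those items fall into one group; items 2 and 3 pattern together separately, yielding two candidate scales.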

Look for some evidence of statistical analysis such as factor analysis, cluster analysis, or item-scale correlations. Although it is not important that you understand the details of these statistical procedures, it is critical to realize that their goal is to reduce a large number of items to a smaller number of scales by grouping items together based on how these behaviors or characteristics are related to each other and allowing for the deletion of items that are not working.

For example, one way of determining how well the items relate to the scales being measured is by examining the relationship between the individual items that comprise each scale and the overall scale scores. High item-scale correlations indicate that the chosen items do indeed relate closely to the scales being measured. On the other hand, items with low item-scale correlations should be dropped from the scale.
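The item-scale check described above can be sketched as follows. The corrected item-total correlation (each item against the sum of the other items on its scale, so the item does not correlate with itself) and the 0.30 keep/drop threshold are common conventions, not something this book prescribes; the data are hypothetical:

```python
from math import sqrt

def pearson(x, y):
    # Pearson correlation between two lists of ratings
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

def item_scale_correlations(ratings):
    # Corrected item-total correlation: each item against the sum of the
    # OTHER items on its scale, so the item is not correlated with itself.
    k = len(ratings[0])
    out = []
    for i in range(k):
        item = [row[i] for row in ratings]
        rest = [sum(row) - row[i] for row in ratings]
        out.append(pearson(item, rest))
    return out

# Hypothetical ratings: 5 raters x 4 items; item 3 behaves oddly
ratings = [
    [4, 4, 5, 3],
    [3, 3, 3, 4],
    [5, 4, 5, 3],
    [2, 2, 3, 4],
    [4, 5, 4, 3],
]
for i, r in enumerate(item_scale_correlations(ratings)):
    flag = "keep" if r >= 0.30 else "consider dropping"  # 0.30: common rule of thumb
    print(f"item {i}: r = {r:+.2f} ({flag})")
```

In this toy data set, items 0 through 2 correlate strongly with the rest of their scale, while item 3 runs against it and would be a candidate for deletion.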

Also look at whether items grouped together by the statistical techniques make sense. Because these techniques group items according to their statistical relationship, items that are conceptually unrelated may end up on the same scale. For example, “Being attentive to the personal needs of direct reports” may be empirically related to “Maintaining an orderly work space,” not because these two skills are conceptually linked but because the managers used in the analysis happened to score high on both. An intuitive look at scale composition can weed out item groupings that make little sense.

On the other hand, if feedback scales appear to have been created solely by intuitive means, be aware that there are no data to show how well the behaviors, skills, or traits actually work together to measure a more abstract construct. The question of whether these item groupings represent actual competencies among managers remains unanswered.

An important consideration in the development of items and scales is the issue of customization. Some instruments use an “open architecture” approach that allows items to be added on client request. These items are usually chosen from what is known as an “item bank.” Although this feature is designed to meet customer needs, there is currently professional disagreement about the degree to which this practice reduces the integrity of the instrument or adds to the knowledge base about the emerging demands of leadership.
