STEP 13:
LEARN WHAT STRATEGIES ARE USED TO FACILITATE INTERPRETATION OF SCORES

Enhancing performance or changing behavior is difficult, even when feedback on these areas has been comprehensive. The difficulty is partly because the feedback can be overwhelming, especially if there is a lot of it, if it is negative, or if it comes from many different perspectives. Without some map or guidance about what is most important to direct one’s attention to, the recipient can get lost, miss important data, or give up on processing the information.

Rather than just presenting self and rater scores, an instrument should build some kind of framework for corrective action into the feedback display.

There are at least seven strategies for helping managers understand their feedback:

•  comparison to norms,

•  highlighting largest self-rater discrepancies,

•  item-level feedback,

•  highlighting high/low scores,

•  comparison to ideal,

•  importance to job or success, and

•  “Do more/Do less” feedback.

These can be built into either graphic or narrative displays.

Any feedback display should use at least one of these strategies because without any of them, the manager has no way of knowing what to make of his or her scores. The better feedback displays usually use a combination of two or more strategies.

If well laid out and clearly integrated, the use of multiple strategies may increase both the meaningfulness and the impact of the data, because it gives the manager several viewpoints from which to examine the information and makes negative feedback more difficult to dispute. Yet, past a point, more may not be better. For some managers, more information may be too much, especially if the display is comprehensive but not entirely clear. Again, it is up to you to decide whether the balance between information/impact and clarity/simplicity is right for your target group.

Comparison to Norms

Comparison to norms is probably the most widely used of all strategies. Norms usually represent an average for all people who have taken the instrument but sometimes are based on one specific target group (middle managers, for example). These averages are presented for each scale on which feedback is received so that the manager can compare his or her own scores to the norm. The manager uses this information to ask himself or herself: How am I doing compared to others? The more the norm group resembles the manager in terms of managerial level, the more relevant this comparison will be.

Norms can be displayed in a variety of ways. They can be listed as a separate column of data to be compared visually to one’s own score, or one’s own score can be plotted against norm data. When this latter strategy is used, the graphic display contains standard scores or percentiles. A standard score is a statistical computation that allows individual scores to be compared to the total distribution of scores by taking into account the mean and standard deviation of the population. A percentile represents the percentage of persons who have lower scores than the individual being rated.
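
To make these terms concrete, here is a minimal sketch, in Python, of the arithmetic behind a standard score and a percentile. The scale name and all numbers are hypothetical rather than drawn from any particular instrument.

    def standard_score(raw, norm_mean, norm_sd):
        # Distance from the norm-group mean, in standard-deviation units.
        return (raw - norm_mean) / norm_sd

    def percentile(raw, norm_scores):
        # Percentage of norm-group scores that fall below the raw score.
        below = sum(1 for s in norm_scores if s < raw)
        return 100.0 * below / len(norm_scores)

    # A manager's "Delegation" scale score of 4.2, compared with a
    # hypothetical norm group (mean 3.8, standard deviation 0.5):
    print(standard_score(4.2, 3.8, 0.5))               # 0.8 (above the mean)
    print(percentile(4.2, [3.1, 3.6, 3.8, 4.0, 4.5]))  # 80.0 (above 4 of 5 norm scores)

In practice, vendors compute these values against their full norm database; the sketch only illustrates the underlying arithmetic.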

Norms for diverse populations (for example, by gender, industry, or country) are usually located in the trainer’s manual. How often a vendor updates norms depends on assumptions in the theory behind the instrument. If the skills and abilities an instrument measures are believed to be sensitive to change over time, then norms should be updated on a regular basis. In that case, skills do not disappear from effective managers’ repertoires; rather, their relative importance changes with trends in organizations. For example, managers today are more aware of the importance of teamwork as organizations have become flatter in structure. If, on the other hand, there is no reason to believe that the importance of skills and abilities will fluctuate over time, and if there is statistical evidence to support this, then cumulative norms (norms that include everyone who has taken the instrument) are sufficient. Newly developed and recently translated instruments will most likely use cumulative norms because only a small number of managers have taken them.

Highlighting Largest Self-rater Differences

Although by definition all 360-degree-feedback instruments present self-ratings compared to others’ ratings, some feedback displays go beyond these comparisons to highlight scores on scales where self-rater discrepancies are meaningful. Managers often wonder how much of a difference between ratings is enough to be meaningful, and this type of highlighting lets them know. It can be very useful, especially when combined with rater breakout of data. For example, the ratings of self and peers on scales that concern managing people may be similar, whereas the ratings of direct reports may be significantly lower than self-ratings. This can happen because supervisory skills are often most visible to those being supervised.
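
As a purely illustrative sketch (the scale names, ratings, and the 0.5 threshold below are hypothetical; real instruments derive “meaningful difference” thresholds statistically), highlighting of this kind might be computed as follows:

    SELF = {"Managing People": 4.5, "Resourcefulness": 3.9}
    DIRECT_REPORTS = {
        "Managing People": [3.2, 3.5, 3.0],  # visibly lower than the self-rating
        "Resourcefulness": [3.8, 4.0, 3.7],  # close to the self-rating
    }
    THRESHOLD = 0.5  # minimum gap treated as meaningful (illustrative only)

    for scale, self_score in SELF.items():
        avg = sum(DIRECT_REPORTS[scale]) / len(DIRECT_REPORTS[scale])
        if abs(self_score - avg) > THRESHOLD:
            print(f"{scale}: self {self_score:.1f} vs. direct reports {avg:.1f} -- flagged")

Here only “Managing People” would be flagged, mirroring the example above in which direct reports rate supervisory skills lower than the manager rates himself or herself.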

Item-level Feedback

Though item-level feedback can be unreliable (that is, single items do not adequately represent complex phenomena), it can be helpful when used in conjunction with feedback on scales because items provide more detailed information. Learning that you scored low on “Resourcefulness,” for example, may not be as helpful as knowing how you scored on the specific items linked to “Resourcefulness.” Item-level feedback can give the manager some leads about what to begin to do differently in order to be more effective.

Highlighting High/Low Scores

High and low scores are sometimes presented on individual items and sometimes on scales. This type of data can be presented as graphs or as listings with symbols denoting the highest and lowest scores.

Highlighting the high and low scores can provide participants with a quick overview of their strengths and weaknesses, especially when they are receiving a lot of feedback. The danger is that when high and low scores are item scores, what is presented can be a smattering of items from many different scales—which may be an unreliable summary of the data. The more useful approach may be to present high and low scale scores, with representative high or low items highlighted within scales.

Comparison to Ideal

Comparison to an ideal is not widely used. When used, it may represent a theoretical ideal for leadership or management, or it may be data provided by the manager himself or herself (for example, What kind of leader do I see as ideal?) or by raters (for example, What kind of leader do raters see as ideal?). Self or rater data on actual performance can then be compared to data from self or raters on the ideal leader.

Importance to Job or Success

Importance ratings may reflect perceptions of how important a skill or practice is to effectiveness in one’s job or to longer-term success in one’s organization. These data are usually collected from managers and/or from their raters when the instrument is completed. Sometimes raters rate each item or scale in terms of its importance (to the job or to success), and sometimes they choose a certain number of items and scales that are most important.

The use of this strategy can be very powerful in prioritizing which parts of the feedback may need the greatest attention, especially when the manager and his or her boss agree. Managers who are rated low on several scales can check the importance data and focus their developmental efforts on areas that were seen as important for their short-term or longer-term effectiveness. When the manager and boss do not agree, importance data can provide a fairly non-threatening way to begin a conversation about the overall results.

Some caution should be used, however, in working with ratings of importance to job or success. Importance ratings may reflect people’s impressions of what has gotten executives to the top as much as real and critical leadership competencies. These ratings may vary depending on the perspective of the person providing the data (the manager’s boss, peers, or direct reports). Mid-level managers may rate importance for success differently than executives do. And ratings may change over time with changes in organizational culture or business strategy. In addition, few instrument developers who provide this type of feedback have assessed the reliability of importance ratings.
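
These caveats aside, here is a minimal sketch of the prioritization idea described above, using hypothetical scale scores and importance ratings: among scales rated below a cutoff, those rated most important are listed first.

    SCORES = {"Delegation": 2.9, "Strategic Planning": 3.1, "Listening": 4.4}
    IMPORTANCE = {"Delegation": 4.8, "Strategic Planning": 3.5, "Listening": 4.2}
    LOW_CUTOFF = 3.5  # scores below this are treated as development needs

    # Low-rated scales, ordered by rated importance (highest first).
    low_scales = [s for s in SCORES if SCORES[s] < LOW_CUTOFF]
    for scale in sorted(low_scales, key=lambda s: IMPORTANCE[s], reverse=True):
        print(f"{scale}: score {SCORES[scale]:.1f}, importance {IMPORTANCE[scale]:.1f}")

With these numbers, “Delegation” would head the development list because it is both low-rated and rated most important.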

“Do More/Do Less” Feedback

“Do more/Do less” feedback is not used by many instruments, but it can be a valuable piece of information, especially when respondents rate how frequently a behavior or practice is observed. “Monitoring the progress of direct reports’ work” is a good example. A high score means the manager does a lot of monitoring. What the manager cannot know from this score alone is whether a lot of monitoring is good, or whether he or she is doing the appropriate amount. Although norms will tell the manager how he or she is doing relative to other managers, they will not shed light on whether direct reports are satisfied with this level of behavior. Asking raters to indicate whether the manager should do more or do less provides an opportunity to see their side of the picture.

Linking Strategies Together: Issues in Considering Impact of Feedback

Different combinations of feedback strategies will produce different kinds of impact on the manager receiving the feedback and may result in different conclusions being drawn from the data. Consider the blend of strategies presented in the feedback design, what that blend will highlight for managers, and what the impact will be. Again, maximum impact may not always be desired. The goal should be meaningful feedback with appropriate impact.

If you still have several possible choices once you’ve reached this point in the selection process, it may be worthwhile to take those instruments yourself, have them scored, and receive your own feedback. This kind of experience is especially important if you have any doubts about the impact the feedback will have or whether the depth of intervention will be appropriate. A second-best alternative is to fill one out on someone else, as an exercise for yourself only. As you complete the questionnaire, consider the feel or nature of the items. Items that look just fine on paper sometimes become unclear or awkward when you actually have to use them to rate a real person.

If you do have the opportunity to review your own feedback, pay attention to your own reactions to the competencies being measured by the scales, how rater data are broken out, and what graphic strategies are used in the feedback display to help managers understand and interpret their scores.
