Chapter . Setting Up the System

Let's assume your organization wants to develop its own feedback system. It's not easy. In fact, many consultants recommend not doing it yourself, simply because good 360-degree feedback systems are so new and so subtle, and because the outcome of a poor system can be so disastrous.

Even if you decide to hire a consultant to build the evaluation system, however, it helps to know how to put one together. This knowledge allows you to spot potential problems at each of the five steps:

  1. Design and plan the process.

  2. Design and develop the tool.

  3. Administer the instrument.

  4. Process and report feedback.

  5. Plan responses to the feedback.

Step 1. Design and Plan the Process

If your organization has decided to use 360-degree evaluations, then we can assume it has also decided who the ratees will be. Next, choose the raters. This is not as easy as it may appear.

Although the technical definition of 360-degree feedback says the evaluation should include input from all levels—above, below, beside, and outside—such massive, full-circle systems create a number of problems when implemented:

Too Many Forms

Conducting a multirater feedback system is complicated enough, without extra, unnecessary paperwork clogging the works.

Too Much Time

The more raters, the more staff-hours spent on the evaluation. Remember, some feedback forms can take an hour or more to complete.

Not Enough Knowledge

Some of the people in a full-circle evaluation may have limited contact with the ratee. Getting feedback from these people wastes everyone's time.

Of these three problems, the last is the most important. Feedback from people lacking specific knowledge of a manager's performance or work style can dilute the evaluation with hearsay and incomplete or inaccurate comments. Such feedback also wastes the organization's resources. Worst of all, poor feedback misleads the ratee and defeats the purpose of the evaluation.

For example, suppose your organization uses an open-call feedback system in which everyone in the organization is invited to rate the manager. Workers with strong personal feelings for that ratee may volunteer feedback, even though they've never worked for that manager. Their biases, either negative or positive, may cause the manager to adjust his or her behavior needlessly or even wrongly.

Using external customers is another tricky proposition. Some consultants say that a 360-degree evaluation isn't complete unless external customers are involved. But how can customers who may have never worked directly with a manager provide any useful feedback?

One way around these problems is to label this feedback “external customer” or “no direct contact.” Use such feedback as addenda to the actual 360-degree evaluation; don't link it with a performance appraisal or future performance evaluations. Simply allow the manager to see this input to “fill in the blanks” in the evaluation.

Other things to watch out for while planning the process are as follows:

Fairness. Is everyone playing by the same rules throughout the organization? Do all parties feel the process is fair? Even perceived unfairness produces poor results. It also gives the ratee an excuse to ignore the evaluation's final analysis.

Timing. Are the raters going to be present to fill out the forms, or are they on vacation or in training? Are there performance appraisals or other evaluation events that need to be linked to the 360-degree feedback?

Confidentiality. Does everyone perceive that the process maintains strict confidentiality? A study by David Antonioni found that feedback systems in which the raters are known to the ratee produce inflated ratings, which ultimately means less performance improvement.

Step 2. Design and Develop the Tool

The best feedback systems aim at the future by showing the present. Remember this as you start to develop or choose your feedback instrument. A good way to focus on the future is to develop the instrument from the organization's vision. For example, if the organization focuses on teamwork, then the feedback instrument should also focus on teamwork. If the organization values some other behaviors, then the instrument should emphasize those behaviors. The feedback instrument may touch on a variety of areas or behaviors; all of them, however, should ultimately support the organization's vision.

If your organization plans on linking the feedback questionnaire to the manager's performance appraisal, consider using the appraisal's categories or headings in the questionnaire. This will focus your efforts and make the link between the two easier to manage.

You can use a facilitated group process to develop the actual questions. (See Info-lines No. 9406, “How To Facilitate,” and No. 9407, “Group Process Tools,” for more information on group process.) The same people involved in the evaluation—ratees and raters—know what is and is not important in the position of manager.

One easy way to build the list of behaviors is to create broad categories linked to the organization's vision, such as “customer first” or “teamwork.” Then break those headings into specific behaviors. For example, under “customer first” you can place behaviors such as “predicts customer needs” and “responds quickly to customer demands.”

Once the behaviors are defined, you need to create a response scale. David W. Bracken says that satisfaction scales—such as “very satisfied” to “very dissatisfied”—produce more helpful results than frequency scales—such as “always” to “never.” Bracken also recommends using six response choices. This is enough to measure subtle improvement over time and prevent the raters from taking the easy, middle-of-the-scale response. Also, provide a separate “don't know” or “not applicable” answer. This will ensure the feedback accurately reflects the feelings of the raters.
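As a rough sketch, the vision-linked categories, their behaviors, and the six-point satisfaction scale with a separate “not applicable” choice described above could be represented as follows. All category and behavior names here are illustrative, not taken from any real instrument:

```python
# Illustrative sketch: behaviors grouped under vision-linked categories,
# rated on a six-point satisfaction scale with a separate N/A option.
# All category and behavior names are hypothetical examples.

CATEGORIES = {
    "customer first": [
        "predicts customer needs",
        "responds quickly to customer demands",
    ],
    "teamwork": [
        "shares credit with the team",
        "asks for input before making decisions",
    ],
}

# Six response choices (no neutral midpoint to hide behind), following
# Bracken's recommendation. "N/A" is kept separate so it never counts
# toward a score.
SCALE = {
    1: "very dissatisfied",
    2: "dissatisfied",
    3: "somewhat dissatisfied",
    4: "somewhat satisfied",
    5: "satisfied",
    6: "very satisfied",
}
NOT_APPLICABLE = "N/A"

def is_scorable(response):
    """True if a response should count toward a behavior's score."""
    return response != NOT_APPLICABLE and response in SCALE
```

Keeping “don't know” and “not applicable” out of the numeric scale, as in the sketch, is what lets the feedback reflect only raters who actually have an opinion.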

Include a section for open-ended questions. These let managers see why raters rated them the way they did and offer clues on how to fix problems. Bracken says the best way to use open-ended questions is to have the raters complete a sentence: “My manager should stop doing…”

Other issues to consider at this stage include the following:

Focusing on Behavior

Does the evaluation ask about behaviors or personality traits? Behaviors are specific and can be changed; personalities are often vague and probably cannot be changed easily.

Buy-In

Do raters and ratees feel that the listed behaviors are the most important ones?

Looking Toward the Future

Do the listed behaviors include not just what managers do now, but also what they should be (or will be) doing in the future?

Length

Is the questionnaire too long? Workers probably won't complete one that takes more than 15 minutes to finish. And incomplete or abandoned instruments mean compromised results.

Step 3. Administer the Instrument

Now you have to decide who will answer the questionnaires and how to get all potential raters into the process. Pencil and paper is the simplest and most popular way to deliver a feedback instrument. Its downside is the difficulty of dealing with mounds of paper and the time it takes to enter all that data electronically.

Electronic data collection instruments—such as online, fax, electronic meeting support, or telephone—solve those problems, but they can create new ones if the raters either cannot use or are uncomfortable with these technologies. No matter which methods you use, make sure the raters feel that their answers are given in confidence. (See Info-line No. 9507, “Basics of Electronic Meeting Support,” for more information on this topic.)

To ensure that all chosen raters finish the questionnaires, set aside a few minutes of the workday for people to do their feedback forms. Even better, provide a time and place for delivering the instrument, and have an administrator on hand to answer questions and push for completed forms. At the very least, include a policy and intent statement with the form, so all raters know what is being done, why it is being done, and how to do it. Finally, if the raters are to mail in completed forms, provide postage and a return envelope.

Other things to watch out for include the following:

Accurate Coding

Does the rater know who he or she is rating? Preprinting the ratee's name or code on all forms helps ensure that a simple handwriting mistake can't misdirect data to the wrong ratee.

Incomplete Participation

Did you forget to include any appropriate raters—from off-site, other shifts, or other departments?

Too Many Forms

Are you going to drown in a sea of returned questionnaires? Limit the number of raters any particular manager will get, but don't just pick an arbitrary number as the limit. Managers in larger departments should get more raters.

Step 4. Process and Report Feedback

Once all the forms are turned in, you have to enter, compile, and report the data. This can be a daunting task, especially if all managers are rated at the same time. (This, by the way, is usually how it is done.) As stated before, one way to speed this process is to deliver the questionnaires electronically. Another way is to hire consultants to compile the data for you, even if your organization has developed its own instrument. Outside consultants also improve confidence that the process is fair, confidential, and legitimate.

As you collect responses, you have to decide how many are enough to generate a report. David W. Bracken recommends five as the minimum: do not report any question with fewer than five responses. Of course, managers in smaller departments may not get that many. You can choose a smaller threshold, but be sure to inform all parties in those situations that confidentiality may be more difficult to maintain.
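The five-response reporting rule can be sketched as a simple filter over the collected ratings. The function name and data shapes here are illustrative, not from the source:

```python
# Sketch of the minimum-response reporting rule: suppress any question
# with fewer than five responses so individual raters cannot be
# identified. The threshold of 5 follows Bracken's recommendation.

MIN_RESPONSES = 5

def reportable_scores(responses_by_question, min_responses=MIN_RESPONSES):
    """Return average scores only for questions meeting the threshold.

    responses_by_question maps each question to its list of numeric
    ratings (with "don't know" / N/A answers already removed).
    """
    report = {}
    for question, ratings in responses_by_question.items():
        if len(ratings) >= min_responses:
            report[question] = sum(ratings) / len(ratings)
        # Questions below the threshold are omitted entirely,
        # not reported as zero.
    return report
```

For example, a question with ratings [5, 6, 4, 5, 5] would be reported with a mean of 5.0, while a question that drew only two responses would be left out of the report.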

The report itself will usually include a list of the questions or behaviors and their scores. Another way to deliver the data is to report only category scores, rather than question scores. For example, report a score for the heading “customer first,” not “responds to customer needs quickly” and so forth.

This method is less confusing for the ratee. But it also loses some of the subtlety and specificity of a full report. In any case, be sure that the questions or behaviors in each category are actually related to one another. Bracken recommends using a statistical (or factor) analysis at the beginning of the process to ensure questions belong under a particular heading. (See Info-line No. 9101, “Statistics for HRD Practice,” for more information.)
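A minimal sketch of the category-level rollup described above, assuming per-question mean scores have already been computed; the groupings shown are hypothetical, and a real system would first confirm (for example, through factor analysis) that the questions in each category actually belong together:

```python
# Sketch of category-level reporting: roll per-question scores up into
# a single score per heading (e.g., "customer first" instead of
# "responds to customer needs quickly"). Groupings are hypothetical.

def category_scores(question_scores, categories):
    """Average per-question scores within each category.

    question_scores: {question: mean rating}
    categories: {category: [questions in that category]}
    Categories with no scored questions are omitted.
    """
    result = {}
    for category, questions in categories.items():
        scores = [question_scores[q] for q in questions if q in question_scores]
        if scores:
            result[category] = sum(scores) / len(scores)
    return result
```

Reporting only these category averages simplifies the report, at the cost of the question-level specificity noted above.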

If people will be conducting a performance appraisal using data from the evaluations, be sure to instruct them explicitly on how to use that data. Many consultants are uneasy linking opinions to salaries. Clear guidelines can help guide all involved parties through this tricky area. (See the section titled “Performance Appraisals” for more information on this topic.)

Keep an eye out for the following problems:

Inaccurate Transcription

Do your data entry methods ensure that the right answers go with the right ratees?

Editing

Are the open-ended questions being edited? (This may save space and time, but it can also delete crucial information and give the impression that responses are being censored.)

Slow Processing

Are questionnaires processed fast enough so you can include latecomers in the final report? This is especially important if you think some managers may not get enough responses.

Step 5. Plan Responses to the Feedback

After you have compiled all the data into a report, the rated manager must turn the questionnaire results into an action plan for change. This process is not automatic. Even the best-designed and best-written reports can only lead the ratee toward a conclusion; actually deciding what must be done is the rated manager's task.

To formulate an action plan, that manager needs detailed data. The data must be specific, clear, and related to the rated manager. The data should also reflect behaviors or issues that the manager can control or act on. The manager should also have the appropriate data interpretation skills. Seeing the data is only a start. The manager has to turn the report into an action plan, and the organization has a responsibility to help him or her do that.

David W. Bracken suggests three ways to do this:

  1. Facilitators. One-on-one discussions are best for obvious reasons. Facilitators can tell managers how to read the evaluation report, tailoring the discussion to individual managers. They can help managers write their action plans. And they can help monitor improvements over time. On the downside, using facilitators can be more expensive, time-consuming, and (in cases where many managers are being rated at once) unwieldy.

  2. Workshops. Group discussions can provide many of the same benefits as individual facilitators. The main problems with workshops include dealing with a number of very individualized problems at once and the possibility of sharing personal or embarrassing information with other managers.

  3. Workbooks. Printed materials are cheaper than facilitators, can be used at will, and act as a permanent record of what the organization expects of all ratees. Workbooks can also be rigid, unclear, or incomplete, however. Most organizations use workbooks with some sort of facilitated discussion for exactly these reasons.

Because the report may contain a number of criticisms, the manager should choose the one or two most important items on which he or she scored lowest and focus the action plan on them. Once the action plan is complete, the organization should point the manager to the appropriate training and development resources. A particularly helpful (and expensive) option is personal coaching: a coach guides the manager, step by step, through the entire plan.

The manager should then hold a meeting with as many of the raters as possible. He or she should share the results of the feedback report with these people, without justifying or making excuses for any criticized behavior. Have the manager describe the action plan, give timelines if possible, and explain why the chosen behaviors were marked for change. It is important for everyone involved in such a meeting to be nondefensive, nonconfrontational, open, and polite.

Finally, issues to watch out for in this last step include the following:

Poor data. Does the manager have all of the data? Is it understandable? Does it measure behaviors the manager has control over?

Unfairness. Are all rated managers given the same access to help? If facilitators are used, don't forget about other shifts or off-site ratees.

Drifting. Is there a method to ensure ratees develop an action plan? This is particularly important if the only instruction managers get comes from workbooks.

Pitfalls

Multirater feedback systems do raise some concerns. Here are a few things to think about as you consider using one.

Too New

We don't know enough about the process to say how well it works—for example, how reliable is the raters' information, and does improvement in low-rated behaviors actually affect the bottom line? It is important to remember that 360-degree evaluations deal with real people's lives and personalities. You aren't simply measuring how many widgets their department can produce or how far under budget they are. You are rating these managers' core competencies—who they are.

Also keep in mind that multirater systems, like quality initiatives, work only if everyone buys into the idea. If rated managers don't act, or if the organization doesn't follow through and provide HRD resources for these people, the system risks hurting workers' morale and trust. In such situations, it would have been better for everyone if the multirater system had never been started.

Inexperienced Raters

Many participants in a 360-degree evaluation may be unskilled in evaluation or observation techniques. They may have to rely on the instrument's instructions, which could very well be incomplete, inaccurate, or nonexistent.

Another problem is that multirater feedback is very memory dependent. The instrument's questions should cover the entire year. But a rater may either forget events that have happened or may allow recent events to color perceptions of past events. For example, an employee who thought a manager was tough but fair may change that response if he or she was recently counseled for chronic tardiness.

External Customers

Should you include external customers in your 360-degree evaluation? Technically, a 360-degree feedback system has to include external customers' responses. But their participation raises several questions.

Generally speaking, the customers' feedback rests on whether the organization has met their needs. An effective manager will have played some part in that customer service. But determining where or how the manager's influence comes into play isn't easy—especially if these customers had no direct contact with the manager. Also keep in mind that questions used by employee raters in the evaluation may mean nothing to an external customer: “My manager provides opportunities for me to develop on the job” is a useless criterion for a customer. To have a fair, accurate, and easy-to-analyze evaluation, every rater must use the same questions.

Look at your own organization. Do your customers have some direct contact with managers? Are the areas you are measuring customer oriented (for example, “knows nuances of the product”) or more employee oriented (“helps me in my professional development”)? The answers to these questions will tell you whether to include customer feedback in your evaluation, to put it in an addendum, or to ignore it.

Performance Appraisals

Although organizations usually use 360-degree and multirater feedback systems to give managers a picture of their management styles, more and more are looking for ways to tie the feedback to performance appraisals. This may seem sound on the surface. But linking people's perceptions to a manager's paycheck can raise concerns. What if personality conflicts tempt the employee into unfairly rating the manager? What if the manager, fearing a cut in pay, coerces his or her employees into delivering a better-than-deserved rating? Or what if that same manager acts as if he or she were in a popularity contest, perhaps ignoring organizational concerns in an effort to please subordinates? How can external customers fairly rate someone they may never have had any contact with?

Questions such as these may be slowing the rush to merge 360-degree evaluations with performance-appraisal systems, but they are not stopping it. One thing is certain: More research needs to be done in this area.

Benefits

The manager and organization get the following benefits from a 360-degree evaluation:

News

A multirater feedback system can uncover expectations, strengths, or weaknesses the manager may never have thought of.

Global Perspective

Multirater systems provide varied information from different, usually untapped sources.

Standards

The feedback can become a performance benchmark in a manager's performance appraisal.

Accountability

The person being rated is responsible for improving his or her skills and behaviors.

Efficiency

A 360-degree feedback system is relatively inexpensive, simple to implement, and quick to complete.

Perhaps a better way to view 360-degree feedback is as a source of information that can help managers manage better. Researchers Joy Fisher Hazucha, Sarah A. Hezlett, and Robert J. Schneider determined that highly rated managers advanced further (in terms of pay) than lower-rated managers. If 360-degree feedback can improve management skills, then the technique can lead to career advancement.
