STEP 6: Make the Program Accountable

Development-planning programs are supported because they make a difference in critical ways. It is up to you to demonstrate that difference by collecting the data and doing the evaluation that convinces senior management that the program is worth the cost. Program evaluation may indeed be the “slough of despond.” What is possible often falls far short of the desirable. There may be little interest in evaluation. The criteria may be difficult to agree on, much less measure.

Nonetheless, the long-term success of a development program (not to mention your own success) depends on demonstrating the value the program added. That value will be very organization specific; your criteria are not likely to be the same as those used in another organization. But there are common indicators, common evaluation processes, and common approaches to guide your efforts and urge you on.

Whether it is even possible to evaluate your program will depend on the clarity of your program purpose and outcomes, which, you will recall, you defined in Step 2 of this process. For example, if the purpose of the program is to increase employee satisfaction, then some measure of employee satisfaction must be included as an outcome criterion. Likewise, if the purpose of the program is to prepare high-potentials for promotion, then a criterion must reflect whether, for example, high-potentials who complete the program are promoted when positions become available. The outcome measure must reflect program purpose.

However, outcome evaluations are more interpretable if you have built a process evaluation into your program and if you are able to fold those findings back into your program on a continuous basis. Consider the following two examples:

Fifteen people participated in an executive development program where they were provided with feedback, instructed on how to write a development plan, and asked to take the plan and meet with their boss as the first step toward implementation. Twelve months later, a post-test evaluation tool was administered. To the chagrin of all involved, very little change had occurred in these executives. The discouraged program sponsors withdrew their support, and shortly thereafter the program folded.

In a second company, the same feedback and action planning took place, but three months after the event the program designers called all fifteen individuals to ask whether they had shared the plan with their boss and how the boss had responded. Five people said they had never met with their boss. Seven people said their boss had been either noncommittal or cynical about their plan. Only three people reported the experience the program designers had anticipated. These findings pointed to the need for changes to the program before bringing any more executives into the feedback process and before engaging in any post-program evaluation activity. The program designers changed the program so that only executives nominated by their boss could participate. They met personally with individual managers to help them frame their role, and they set up a three-way development-plan meeting with the boss and the executive to facilitate the presentation and hand off the action plan to the line manager. They also created a course for the managers on how to coach and give feedback.

These examples illustrate that evaluating development programs occurs on many levels. Evaluation is the last step in an existing program but also the first step in the next one. Evaluation is outcome based, but to have interpretable outcomes it must also be process based.

However, evaluation is not conducted only at the organizational outcome and process levels; it must also be conducted at the individual level.

At the individual level, the program designer and the executive will probably want to know how much change has taken place from time one (perhaps when feedback was collected) to time two (six to eighteen months later). There are several ways to gather this information. The simplest method is to interview the participants and their bosses, subordinates, or peers and ask them whether the participant has changed in a particular direction. If you do interviews about a development goal, remember to obtain the participant's permission to talk to his or her boss, peers, and direct reports about the goal. One approach is to obtain this permission in writing during your initial program. In addition, you should guarantee the participants and those you interview both anonymity and confidentiality. While this method is the most straightforward, it is also time-consuming.

A common evaluation method is to administer the 360-degree-feedback instrument a second time, after the executives have had a chance to develop. Although this is simple to do, the results of such a retest are surprisingly difficult to interpret accurately. Many statistical artifacts in pre/post testing, regression to the mean among them, may mask real change. If your organization is large enough and receptive to a rigorous evaluation, you may want to do a formal evaluation, complete with control groups. Unless you are an evaluation expert, you will want help with this methodology. Many vendors of training-and-development tools can provide (for a fee) help in designing a program evaluation. Assistance may also be available from a local college, from students, faculty, or both.
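To make the pre/post and control-group logic concrete, the following sketch compares participants' change in average 360-degree ratings against a control group's change. It is a minimal illustration only: the ratings, group sizes, and variable names are hypothetical, not drawn from any actual program, and the statistical tests shown are one common choice among several.

    # Minimal sketch with hypothetical data: does the program group's
    # change in average 360 ratings exceed the control group's change?
    import numpy as np
    from scipy import stats

    # Hypothetical average 360 ratings (1-to-5 scale) at time one and time two.
    participants_pre  = np.array([3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.3, 2.7])
    participants_post = np.array([3.5, 3.0, 3.6, 3.4, 3.1, 3.5, 3.6, 3.0])
    controls_pre      = np.array([3.0, 3.2, 2.9, 3.1, 3.3, 2.8, 3.0, 3.1])
    controls_post     = np.array([3.1, 3.2, 3.0, 3.1, 3.4, 2.9, 3.0, 3.2])

    # Within-group change: paired t-test on time-one vs. time-two ratings.
    t_paired, p_paired = stats.ttest_rel(participants_post, participants_pre)
    print(f"Participant change: t={t_paired:.2f}, p={p_paired:.3f}")

    # Between-group comparison: did participants gain more than controls?
    participant_gain = participants_post - participants_pre
    control_gain = controls_post - controls_pre
    t_ind, p_ind = stats.ttest_ind(participant_gain, control_gain)
    print(f"Gain vs. control group: t={t_ind:.2f}, p={p_ind:.3f}")

With groups this small, results like these are suggestive at best, which is one more reason to seek expert help with the design and interpretation of a formal evaluation.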

Implementing and evaluating a program of development planning is time-consuming and difficult, and the results are often not immediately evident. One way to maintain momentum in the organization (and in yourself) is to take advantage of small wins. Set small, achievable milestones for your program rather than one global, long-term goal. A first goal might be to provide feedback to a group of executives to demonstrate to the organization that collecting and providing confidential data is possible. Your next goal might be to have a group of senior managers discuss the skill levels of their managers and define a program purpose, such as increasing those skills. Setting program-design goals, with evaluation and feedback along the way, is just as important as setting the more global goals for the program overall and for the individuals in it.

For more information, see:

McCauley, C. D., & Hughes-James, M. W. (1994). An evaluation of the outcomes of a leadership development program. Greensboro, NC: Center for Creative Leadership.
