Furthering the Process

We have already mentioned that approaching leadership development as a process rather than an event can significantly enhance long-term impact. Periodic follow-up is an important ingredient of the process because it provides feedback that can extend and add to the developmental experience. Follow-up can also create motivation in the form of a challenge.

There are, however, some do's and don'ts when it comes to the specifics of following up:

Follow-up should

  • Be scheduled and intentional, not random and out of the blue.
  • Focus on being developmental rather than evaluative or, worse yet, punitive.
  • Be informative and educational.
  • Be targeted at the specific behaviors that the individual being tracked has been working on.

Follow-up results should not

  • Be used to make personnel decisions.
  • Be kept from the person being tracked. Knowing that he or she won't be able to learn from the results, the individual will likely become disenchanted with or cynical about the whole developmental process and may not take the follow-up seriously.

Evaluation's Theory of Relativity

If you ask ten people whether popular music has changed in the past twenty years, you probably won't be surprised when they all answer yes. It's a question that can be looked at objectively and answered easily by comparing the music that tops the charts today with the hits of twenty years ago.

However, if you ask a different question—How much has popular music improved in the past twenty years?—you will likely receive ten different responses. This question has no right or wrong answer because it is subjective and depends more heavily on context and frame of reference than the first question does. Factors such as age and culture might influence a person's opinion on whether today's pop music is better than that of the early 1980s. So it would be hard to come up with a conclusive answer to the question of whether pop music has improved.

A similar difficulty confronts those who try to measure the amount of improvement resulting from a leadership development intervention. Definitions of what constitutes a successful intervention can vary from one person to the next. A great deal of research has been conducted on this question over more than two decades. Although this research has shed light on the problem and has helped evaluators of competency and skill development measure change more accurately, there is still no clear-cut solution. One reason is a methodological phenomenon called response-shift bias, illustrated in the following story:

Anna, a manager, is preparing for a leadership development program by completing the preprogram surveys. She considers herself to be a fairly good leader and gives herself a rating of 6 on a scale of 1 to 10. During the program she receives feedback from her boss, peers, direct reports, and others. They indicate that Anna has some significant problems to overcome before she can consider herself an effective leader.

Anna also learns during the program that leadership is a much more complex concept and entails a much wider set of behaviors than she originally thought. At the end of the program, she sets some development goals for herself and returns to work. After some time has passed, both Anna and her colleagues notice that her leadership abilities have improved substantially. But when Anna completes a follow-up self-evaluation, she gives herself a rating of 5 out of 10—lower than her score before the program.

An observer looking only at the pre- and postprogram scores would conclude that Anna's leadership skills declined as a result of the program. Companies that send their executives and managers through leadership development programs often want “proof” of the value of the programs in the form of hard numeric data and give little credence to anecdotal evidence. So even though Anna's real progress would have been clear from her testimony and that of her colleagues, in many organizations that testimony would be dismissed or ignored and an effective leadership development program might be cast aside.

Response-shift bias has been most thoroughly studied in self-report data, but because the expectations of others are undoubtedly affected by the knowledge that someone has gone through a leadership development intervention, this effect can skew the ratings given by colleagues as well.

Response-shift bias and its impact can perhaps be best understood by looking at three types of change that have been labeled alpha change, beta change, and gamma change.

• Alpha change is true change. If Anna's self-rating after the leadership development program had reflected her actual improvement as a result of the program, it would probably have been a 7 or an 8 on the 10-point scale. Alpha change is difficult to capture, however, because of the other two types of change.

• In beta change, the rating scale is in effect recalibrated between the preprogram evaluation and the postprogram evaluation. In other words, the expectations for the person being rated are altered. Let's say Anna's boss had given her a rating of 4 before the program, indicating a need for improvement. The boss expected Anna to return from the program with dramatically improved leadership abilities, and that established a new baseline. So even though Anna's new leadership behaviors might have qualified for a rating of 6 from the boss before the program, after the program the boss rates them only a 4 because of these inflated expectations. A comparison of the pre- and postprogram scores then creates the impression that Anna has not grown as a leader when she actually has.

• Gamma change occurs when the entire meaning of what is being evaluated shifts in the respondent's mind. In Anna's case, her perception of the range of behaviors that constitute leadership was radically altered during the leadership development program. Before the program she thought leadership comprised six skills, including the ability to motivate others and having vision. After the program, however, she understands leadership as encompassing ten dimensions. Before the program she considered herself to be quite adept in the six areas she knew about; thus her self-rating of 6. Now, even though she believes she has improved on, or is already doing well in, seven of the ten dimensions, her realization that she needs work on the other three causes her to give herself a rating of 5. So her actual improvement is not evident to someone looking only at the numeric data.

One question that arises in discussions of how follow-up should be carried out is: What exactly is meant by periodic? Is it best if follow-up comes within three, six, or twelve months after the initial leadership development experience? Should it continue indefinitely? Going by the theory that development is best approached as a process rather than an event, program planners and participants need a way to build momentum between developmental experiences and to tie them together so that they feel like a process and not a sequence of events. At the same time, changes in self-awareness and in behavior, at both the individual and organizational levels, take time to become established enough to be measured. So the periods in periodic follow-up need to be long enough for real change to be measured but not so long that the developmental flow is lost. As for whether follow-up should continue indefinitely, that depends largely on the type of assessment used, because some assessments have limited effectiveness in repeated use but can be powerful when combined with complementary methods. Using the right variety of assessment batteries, combined with an individual and organizational executive development plan, can be an integral part of a company's efforts toward management development, succession planning, and organizational competitiveness.

