SECTION 5

Uncertainty Around Measurement

The Problem

“We’d love to be able to show that the high-potential managers who went through our program are being promoted, are better leaders, and are not leaving the organization. We just don’t have access to data that would prove that. What data we can come up with feels worthless in comparison.”

“The goal of our training initiatives is to change the culture, but how do you measure culture change?”

“When we send out an email evaluation after a session, fewer than half the participants fill it out, and those that do seem to skew positive, because only those who were really into it take the time to do the online survey to provide feedback. We got better results when we used paper evaluations at the end of our classes, but we don’t have the capacity to enter and track that data anymore.”

“We ask participants a lengthy list of questions about every aspect of our training, get responses that are mostly favorable, and then we forget about them.”

What’s Happening Here?

Trainers of all experience levels, including those who shared their problems in this section intro, struggle to determine what data is worth collecting in their organizations and how to collect it. Some trainers believe that because it is difficult to isolate whether changes in employee behavior are the result of training or of other factors in the workplace, the data they can collect isn't valuable. Others conclude that because they can't assign the new employee behaviors any monetary value, it's not worth collecting data at all. Still others lament that they ask participants to complete lengthy evaluations only to stuff the responses into a desk drawer. Finally, some are challenged to create assessments that actually measure whether learning objectives have been achieved.

Despite the many challenges new trainers face in the area of measurement, there are three reasons to evaluate your programs:

•   To enhance or improve them—that is, to learn from past mistakes or build on successes.

•   To maximize the transfer of learning to behavioral and subsequent organizational results.

•   To demonstrate the value of training to the organization.

The Kirkpatrick model is a well-regarded, widely used model for evaluating workplace training. It focuses on four levels of effectiveness:

•   Level 4—Results. How did the training affect key metrics?

•   Level 3—Behavior. Did participants use their new knowledge, skills, or attitudes back on the job? Did learning transfer?

•   Level 2—Learning. Did participants acquire the intended knowledge, skills, or attitudes, and are they confident they can apply them?

•   Level 1—Reaction. Did participants find the program satisfying, engaging, and relevant, and do they intend to utilize what they learned?

These levels are listed from Level 4 to Level 1 intentionally. While the model was originally presented with Level 1 first, it was later revised to consider the more important levels—4 and 3—first, and to address Level 1 last. Over time, the focus has shifted so heavily toward Level 1 that many organizations evaluate at this level alone, perhaps because it is the easiest information to collect. The creators of the model did not intend for the focus to rest on the lower levels, however, and they suggest that when we talk about them, we do so in order of importance—reverse numerical order.

There are other models for evaluating training, but Kirkpatrick's is a good place to start: it is an industry standard, and if you can collect solid data on these four aspects of your initiatives, it will reflect well on your efforts and strengthen your organizational reputation. Other models include a return on investment (ROI) measure, such as the Phillips ROI Methodology. Credibly evaluating the impact and ROI of a program requires more robust analysis, which is typically reserved for major training investments. Several of the Go Deeper resources in this section contain best practices for calculating ROI.
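As a point of reference, the basic Phillips formula expresses net program benefits as a percentage of program costs:

ROI (%) = (net program benefits ÷ program costs) × 100

So a program that costs $50,000 to design and deliver and yields $80,000 in measurable benefits would show an ROI of (($80,000 − $50,000) ÷ $50,000) × 100, or 60 percent. These numbers are purely illustrative; in practice, the hard part is isolating and monetizing the benefits, not the arithmetic.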

One evaluation experience in which I succeeded at all four levels involved a program that trained leaders to assign ratings that better distinguished strong performers from weaker ones during the annual performance review. The program involved a small group of leaders from the same department. The hope was that the training would deliver business results (Level 4) in the form of monetary savings and increased productivity. Before the training, most employees were rated high across the board and thus received the highest merit increases, despite low performance. We also believed that after receiving proper performance feedback, employees would perform better.

Because this was a small group, and because we had access to all their production and budgetary results, we were able to measure the Level 4 impacts. We also intentionally left another group of leaders untrained that year to see whether we could detect a difference in results. Costs associated with merit increases went down somewhat in the group rated by trained managers. The decrease was smaller than expected because we could now offer notable merit increases to those who earned them, rather than the disappointing increases we'd been able to provide in the past, when virtually everyone got one. Performance also improved across the board among the direct reports who had been properly rated. Our best guess as to why: employees realized they had to produce results to qualify for a merit increase, and rating employees correctly during annual reviews was only the first step in a continuous process of targeted, accurate feedback.

Level 3 was also fairly easy to measure, because we had the range of ratings each manager had given their direct reports over the prior five years. We knew participants were applying the learning (Level 3) after the workshop when those numbers shifted drastically, indicating a real desire to reward high performers.

What may have helped us achieve the results we wanted at Levels 3 and 4 was expanding our Level 1 and Level 2 evaluations beyond the traditional reaction and learning measures to include measures of confidence and of intention to make this behavioral change in the next performance review cycle.

The Challenges

Feeling uncertain about measurement? Look to this section for solutions to these evaluation conundrums:

Challenge 25. I’m not sure how to show the effect of our work on organizational imperatives

Challenge 26. I’ve never designed a knowledge quiz before

Challenge 27. I don’t know how to construct valid, reliable test questions

Challenge 28. I’d like my Level 1s to be more than smile sheets
