6
Pillar #4: Tell It Like It Is with Truthful Measurement and Evaluation


There’s an old but still poignant story that goes like this: A young newlywed couple was preparing a large ham to roast for Sunday dinner. As the wife was putting the seasoned ham into the pan for baking, the husband asked: “Aren’t you going to cut the end off it?” The wife asked him what he meant and why she should do that. The husband replied that cutting off the end from the ham was the way he had always seen his mother cook hams. Her cooking confidence now shaken, the wife called the husband’s mother to query her about this ham-lopping practice. The mother replied that she had learned this from her mother but couldn’t recall ever knowing the reason. The new bride then called her mother-in-law’s mother, who verified the legitimacy of the practice and said she learned this from her mother; although she continued to cut the end off to this day, she also couldn’t say with certainty the reason for this practice. Fortunately, though very old, the grandmother (the bride’s husband’s great-grandmother) was still alive, well, and surprisingly lucid. A call to her cleared up the origin of the ham-trimming ritual. In an age-quaked voice, the grandmother explained that she had regularly roasted a ham for her family’s Sunday dinner. She went on to explain that they had a very large family and so she had to purchase a large ham to feed everyone. Because the ham was always so large, it wouldn’t fit in her only roasting pan. And a larger roasting pan wouldn’t have fit in her small oven. Therefore, she did indeed trim the end off—so it would fit.

Over our many decades of work with training departments and leaders around the world, it has struck us that a good deal of the rationale for measuring training is a bit like the ham-trimming ritual. The measurement ritual is based on an unquestioning adherence to the professional belief that we do it because we’re supposed to. Evaluation of training is often performed as some sort of Human Resource Development (HRD) good-housekeeping practice: ritually prescribed and recommended—although in reality, more talked about than done.

Courageous Training leaders in our experience have confronted the measurement and evaluation assumption head on. They are dissatisfied with current practices and see a lot of wasted and nonproductive effort—a lot of wheel-spinning, but not a lot of forward progress toward valid and useful evaluation results.

In this chapter, we advise all training leaders to seriously question the what and why of evaluation, challenging the ritualistic but nonproductive practices that tend to be followed. The message that runs through this chapter is quite simple: the goal of training evaluation is not to prove the value of training; the goal of evaluation is to improve the value of training. More specifically, we suggest how training leaders can take courageous actions on several fronts:


  • Rethinking the Kirkpatrick prescription (described below)
  • Exposing the false god of return on investment (ROI)
  • Confronting the folly of seeking credit
  • Telling the truth about training impact (and the lack of it)
  • Accurately describing the real causes of training failure
  • Getting information to the people who can—and should—do something with it

RETHINKING THE KIRKPATRICK PRESCRIPTION

Virtually all HRD practitioners are familiar with the four-level evaluation framework first articulated by Donald Kirkpatrick in 1957 and written about extensively (Kirkpatrick 1976; Kirkpatrick and Kirkpatrick 2006). Kirkpatrick was a true training evaluation pioneer and made a tremendous contribution to the HRD profession by developing this taxonomy fifty years ago. The hierarchy assesses learner reactions at the first level, learning mastery at the second, application of learning on the job at the third, and value to the sponsoring organization at the fourth. Later, Jack Phillips (2003) added and defined “Level 5” as measurement of ROI, describing this as the ultimate level of evaluation for training.

Kirkpatrick and Phillips continue their fine work and others build on and refine it. Our Courageous Training colleagues have no quarrel with the general concepts of Kirkpatrick and Phillips. We also appreciate the attention that their work draws to the whole issue of training value and how it should be measured. But we have some serious concerns about their approach.

We have noticed a rampant belief among training practitioners—a belief that Phillips himself has promoted (Chief Learning Officer Magazine 2003)—that training departments should follow a suggested formula stipulating the percentage of their efforts that should be subjected to each of Kirkpatrick’s four levels of evaluation. According to this guideline, HRD departments would evaluate 100% of their programs on participant reaction (Level 1), about 60% on learning mastery (Level 2), 30% on behavioral application (Level 3), and 10% or less on impact and ROI (Level 4).

Courageous Training leaders operate with a different perspective. They know that the last thing that should drive which efforts get evaluated and how they are measured is a rote formula, no matter how well-intentioned. The first, final, and only legitimate arbiter of what and how to evaluate is “why”—what purposes are to be served, and why are these purposes important?

We fear that evaluation practitioners often decide which level of evaluation to use in their analyses and studies based principally on which levels of evaluation are easiest and least expensive to do, rather than on the value that can be derived from the information. For example, Level 1, or participant reaction, surveys are indeed easy to design, administer, and analyze. But they are largely superficial in most instances and, in our experience, have virtually no relationship to the amount or relevance of new learning achieved, and even less to do with whether trainees will ever use their learning in on-the-job applications that will yield valuable organizational outcomes.

Courageous Training leaders think carefully about why they need the evaluation data from any of the levels and what decisions the data will help them make. For example, with a new training initiative or with the launch of a new training vehicle, such as podcasts, it may be very important to know how trainees react to it (Level 1) and to collect their opinions for making it easier to use.

Level 2, evaluation of learning, should probably be used in all training programs. First off, feedback is at the heart of learning. Testing helps learners consolidate new knowledge and gauge their mastery so that they can remediate if needed or achieve a sense of satisfaction if they have mastered something new. Test results also help a training leader determine whether more, or less, instruction is needed. In our view, if it’s worth teaching in the first place, it’s worth knowing if it was learned. So, if any formula is applicable, it ought to be that 100% of training gets evaluated at Level 2. Such Level 2 evaluation can take many forms. It may be something that occurs as a formal or separate component at the end of or after the training. It may be something that is less formal (although no less objective) and may well be done as part of the training. The key issue is that both the learner and the organization/facilitator know where the development stands after training.

The return on training investments comes down to the simple formula we have described in Chapter Three and is driven directly by the number and proportion of trainees who actually apply, on the job, the learning they have acquired from training interventions. It’s simple: if they use it (assuming it is worth using in the first place), then it pays off; if they don’t, the learning investment is mostly wasted.
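To make this logic concrete, here is a minimal illustrative sketch in Python. The figures (program cost, value per applying trainee, group size) are entirely hypothetical, and the function is not the authors’ Chapter Three formula itself; it only mirrors the proportional logic described above, in which payoff rises and falls with the share of trainees who actually apply what they learned.

```python
# Illustrative sketch only: hypothetical figures, not the authors' formula.
def training_payoff(trainees, application_rate, value_per_applier, total_cost):
    """Net value of a training investment, driven by how many trainees apply it."""
    appliers = trainees * application_rate
    gross_value = appliers * value_per_applier
    return gross_value - total_cost

# Same program, same cost; the only variable is the proportion who apply on the job.
cost = 200_000        # hypothetical total program cost
per_applier = 5_000   # hypothetical value created by each trainee who applies
for rate in (0.15, 0.50, 0.85):
    net = training_payoff(100, rate, per_applier, cost)
    print(f"application rate {rate:.0%}: net value ${net:,.0f}")
# application rate 15%: net value $-125,000
# application rate 50%: net value $50,000
# application rate 85%: net value $225,000
```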

So, when it comes to advice on how many learning initiatives ought to be subjected to Level 3 evaluations (i.e., behavioral application), a Courageous Training leader will ask, first, how important it is to the business that trainees actually apply their learning on the job. We have worked with plenty of learning programs wherein the company is “betting the business” on an expectation that the training will actually work. A company that is launching a new product in a new market that could drive a major increase in sales revenues, for example, cannot afford for people to ignore or otherwise not use their new product training. Or, a company that is struggling to attract and retain talent cannot afford for managers to ignore or otherwise not use their new leadership or coaching skills. In cases like these, evaluation should certainly be done to find out who and how many are using their training and who and how many are not. The evaluation should dig deeper into the factors that are enabling application, and those that are impeding on-the-job application. If the need for the training is driven by needs for on-the-job performance change or improvement, then by all means it is worth finding out if this is happening—and if not, why not.

Regarding evaluation beyond Level 3 to assess value to the business and even ROI (Level 5), we come closer to agreeing with the “10% or less” guideline. The real value and impact of training hinges on behavioral application. If people use their training, then ROI and business impact should be a given. If people are not using their training, then lack of business impact and ROI are likewise a given. Since application of learning on the job—aimed at important factors and issues—is the make-or-break linchpin for business impact and value, the greatest evaluation attention should be paid to Level 3—when, how, and why (or why not) are people using their learning? Occasional forays into evaluation beyond this level may be needed for several reasons, such as to gather evidence and examples of the value and impact of learning applications, to support arguments for sustained or increased investment, to defend budgets, to market the training by making claims for its value, and so forth.


EXPOSING THE FALSE GOD OF ROI

The past few years have seen a huge amount of action and anxiety around “proving” the value of training and demonstrating return on investment, or ROI. The original ROI concept is absolutely right-minded and valid: the training must be worth the investment it required. No business can afford to spend money on training that will not return value, and there must be a defensible business rationale for each and every training expense. Courageous Training leaders are especially and wholeheartedly in favor of asking sharply focused ROI questions at the time a training initiative is being proposed.

But pursuit of ROI metrics has taken on a life of its own. For many practitioners it has come to mean that “training should make money” for the business or the corollary assumption that the training with the greatest quantifiable ROI is the best for the business. This is not true. Courageous Training leaders recognize that the goal of training is not to “make money” but to build the capabilities of employees who help the organization make progress toward its business goals and strategies.

Consider this example: A rapidly growing and highly profitable organization is facing a severe talent shortfall and is losing critical employees who are being raided by competitors. To execute its business strategy successfully, this company must do everything in its power to retain its highly skilled and valuable employees. Now imagine that there are two training programs being offered.

Training Program A teaches employees how to identify wasteful activities in their work units and implement revisions that save money. Trainees use their training and make cost-saving changes that lead to an average savings of $4,000 per participant in just the first three months of the year after the training. Audits by the finance department verify that this training achieved an enviable ROI of 400%.
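For readers who want to see the arithmetic behind a figure like this, here is a minimal sketch of the standard ROI calculation (net benefit divided by cost, expressed as a percentage). The per-participant cost below is our own hypothetical assumption; the case itself reports only the savings and the 400% result.

```python
# Hypothetical illustration of the standard ROI formula: net benefit / cost * 100.
savings_per_participant = 4_000      # reported in the example above
assumed_cost_per_participant = 800   # hypothetical; the case does not state the cost

net_benefit = savings_per_participant - assumed_cost_per_participant
roi_percent = net_benefit / assumed_cost_per_participant * 100
print(f"ROI = {roi_percent:.0f}%")   # ROI = 400%
```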

Training Program B teaches supervisors the actions and changes they need for forging better relationships with their employees. Nearly three months after its launch, this expensive program is beginning to work as intended. Supervisors and employees both report a number of improvements, including more open and productive dialogue between employees and managers, the establishment of clearer goals, and an increase in coaching. But after three months, attrition is still occurring—although it has slowed down a bit—and there is no evidence of any other financial outcomes. ROI assessed to date is terrible—a huge net loss.

While both programs have value, Courageous Training leaders will argue that Program B has far more value to the organization than Program A—despite its inferior ROI data—because it is directly addressing a critical need of the organization that, unless successfully resolved, will prevent it from executing its strategy. The ROI data from Program A are impressive, but that result has far less strategic value to the business. However, a misguided training manager obsessed with showing ROI for training might have reached exactly the opposite conclusion. The point? The purpose of training is not to “make money”; it is to support the organization in executing its strategy and achieving its goals. Should it do this at a cost-effective level? Yes, of course, and bold training leaders will always try to trim costs from training processes. But they will also make decisions on the value of training based on the extent to which their results are needed for the business to succeed, not on narrowly conceived estimates of ROI.

Courageous Training leaders, being the business-goal bulldogs that they are, always raise ROI questions at the outset of a training request: Is this training needed for our business? Will it provide the results we need to achieve business success at a cost we can afford? What should we measure to know if the training is providing value? But they avoid the blind pursuit of ROI evidence after the training is completed, as if it were the prized brass ring. They recognize that the real value of results data is in being able to make the connection between training, system performance factors (especially manager behavior), and the business impact. They are not tempted by the false gods of ROI that often lead them to misguided, unnecessary, and self-serving efforts that deflect attention from the real value issue: learning how to get more people to use their training.


CONFRONTING THE FOLLY OF SEEKING CREDIT

As we discussed in Chapter Four, successful training works as a process that stretches far beyond the boundaries of the learning event. Successful training interventions involve several key partners in nontraining roles, especially trainees, managers, and senior executives. If the training works, it works because all of the key players played their parts in the process. If it doesn’t work, somewhere a link in the value chain broke. This is the reality of training.

Contrary to this reality, many approaches to evaluation are aimed at parceling out the independent value that can be attributed to the training. The popular ROI processes, for example, include a formulaic calculation for ascribing the proportion of the impact value for which training can take credit.

Savvy training leaders know that training is a fully partner-dependent process; all the partners in the process must do their part to make it work, and it is folly (and self-serving) to try to attribute the lion’s share, or any share for that matter, of credit to any single one of the partners. Courageous Training leaders are ready and willing to stand back from the spotlight of praise and to acknowledge the achievements of the partnership. They know, too, that clouding the principal message (i.e., that it takes a partnership to achieve business impact) by attempting to show how much of the credit training should take will come across as naïve, self-serving, or defensive. Furthermore, it will undermine the trust of the partnership that is needed so the training process will continue to work and contribute to the organization’s success.

The greatest value from evaluation comes not from proving the value of training, but from improving the value of training— by discovering what is working and what is not so that changes can be made to drive continuous improvement. Sometimes improving involves proving, as stakeholders need to know what they are getting for their money, time, and energies. In such cases, it is necessary to measure training impact and value. If, for example, an important training investment is not paying off, pointing out the lack of benefit serves to capture the attention of key stakeholders and can allow the training leaders to engage them in a process of problem solving. By the same token, drawing attention to great achievements serves the same purpose, helping all stakeholders notice and appreciate the roles they played and how their actions helped contribute to success.

Sometimes evaluation must serve less strategic purposes, such as demonstrating participant satisfaction or supporting budget requests. Courageous Training leaders will pursue these purposes when they are necessary, but they will never let these tactical efforts supersede or displace evaluation endeavors aimed at improving training effectiveness. Nor will they ever use evaluation to seek credit for training as if it alone were responsible for worthy achievements.



TELLING THE TRUTH ABOUT TRAINING IMPACT AND THE LACK OF IT

Training gets predictable results. There will almost always be some trainees, though proportions may vary, who end up applying their learning in ways that add value. And there will be another proportion who do not use their learning to change or improve performance and for whom training was mostly a waste. For this reason, commonly used evaluation methods that aim to calculate averages will almost always misrepresent the reality of training impact.

The mean or average, as we all know, can be very misleading. If, for example, Bill Gates, Chairman of Microsoft Corporation, were in a room with 1,000 homeless and destitute people, the average net worth of those individuals would be about $40 million. But to report that all people in the room are doing well economically would be a deception.

In the same way, it can be dishonest and misleading to report an average training-impact figure, because those few trainees who used their training to accomplish some very valuable results may mask the fact that the larger proportion of trainees got no value at all. The training leader at a member company in our user group, for example, was leading a large and strategically critical training initiative to help managers and directors employ more marketing concepts and tools in their business plans and decision making. They discovered from a study using the Success Case Evaluation Method (Brinkerhoff 2003) that just one of several dozen trainees had used the training to directly increase operating income—an increase of a whopping $1.87 million. It would have been very easy (though our Courageous Training leader from our user group did not succumb to the temptation) to calculate an average impact estimate, which would have made it look as if the typical participant had produced close to $100,000 of value from the training—well above and beyond what it had cost. And indeed, had this training function employed one of the typical ROI methodologies, that is exactly what they would have “discovered” and reported.
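A short back-of-the-envelope sketch shows how a single large success can dominate the mean. The group size of 20 is our own simplifying assumption (the case says “several dozen” trainees); the single $1.87 million result comes from the case above.

```python
# Hypothetical illustration: one large success among otherwise zero-impact trainees.
results = [1_870_000] + [0] * 19   # 20 trainees assumed; one produced the $1.87M gain

mean_impact = sum(results) / len(results)
median_impact = sorted(results)[len(results) // 2]

print(f"mean impact:   ${mean_impact:,.0f}")    # $93,500 -- looks like everyone benefited
print(f"median impact: ${median_impact:,.0f}")  # $0 -- what the typical trainee actually got
```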

Instead, our bold friends in this case happily reported and shared in the recognition for the wonderful success that the training had helped one participant produce. But they also dutifully reported the darker side of the picture, that there was a large proportion of the trainees who got nowhere near this outcome and that many trainees made no use of the training at all. It took courage to tell the whole story, but the truth drew attention to the factors that needed to be better managed in order to get more trainees to use their training in positive ways.

By bringing critical attention to the low-level usage of the training and the projected business consequences that would ensue if the strategic shift could not be made, our user-group friends were able to stimulate key executive decisions in some of the business divisions. These decisions would drive more accountability for employing the new marketing skills and more effective manager involvement. The bold actions of these training leaders spawned new attention to the many factors that drive impact and enabled the entire organization to accelerate its strategic execution.

Courageous Training leaders always dig beneath the results headline and investigate the causes. Why were these great outcomes achieved? Who did what to cause them to happen? What would it take to get more such outcomes from future training? What prevented other people from getting similar great results? Only when the whole truth about training outcomes is reported, understood, and acted on, can the training function dig itself out of the hole of sustained marginal results for all employees.



ACCURATELY DESCRIBING THE REAL CAUSES OF TRAINING FAILURE

Training can fail for many, many reasons: bad learning design, the wrong program for the audience, unprepared participants, a mediocre facilitator, people not having an opportunity to try out their learning, people getting training at the wrong time, and so on. Sometimes the failures have obvious and relatively easily correctable causes, such as training not being scheduled so that the right people can get it when they need it. Other times, the failures are due to more deeply seated and solution-resistant causes. We recall below a case that provides an excellent illustration of this.


Customer Service Training Case Study
(or, Who Is Minding the Store?)

Our client—one of the largest computer companies in the world—had offered a two-week residential training program to teach field service technicians how to install, initialize, and troubleshoot upgrades to the huge servers the company sold. This was extremely expensive training that took service technicians out of the field for two weeks and engaged them in practice on a multimillion-dollar simulator. The good news from the evaluation was that the training was very, very effective. We uncovered several instances where service technicians made use of their training in highly critical situations with vital customers—one a major airline, and another one of the world’s largest stock exchanges—to avoid computer outages that would have cost many millions of dollars, not to mention a horrific toll on customer satisfaction.

But there was bad news as well. Our evaluation showed that 40% of the trainees never made use of their training, which is a distressingly high proportion, especially when the company was receiving an increase in complaints about customer service. With a little digging, we discovered the primary cause of the lack of use: amazingly, the 40% of the trainees who never used their training did not use it for the simple reason that they had no customer—not a single one—that owned or had ordered the equipment that the training program taught technicians to install! Why on Earth, we wondered, would district managers with clients already complaining of lack of service take busy technicians out of the field and send them to an unneeded and expensive residential training program?

The answer lay within some deeply rooted and sensitive political divisions in the company. The district service managers did not trust the sales forecasts that came from the sales leaders, who tended to lowball their forecasts so that they would not be punished if they failed to meet them. So as not to be caught short without a qualified technician when a customer needed to have the new equipment installed, these service managers would send a technician to the training “just in case” a new customer with a service need emerged at the last minute. Because service managers were enrolling technicians on this just-in-case basis, the training course was oversubscribed. There was also a waiting list for participation, sometimes lasting several months. Knowing this, service managers were inclined to enroll several technicians, so they could at least get one into the course, which only made the waiting list even longer. The training department, in the spirit of fairness, enrolled technicians on a first-requested/first-served basis, never knowing that there were many technicians on the waiting list who had no need for the course since they had no customers with the equipment.

By doing an evaluation that looked at the total learning-to-performance process (not just the training program itself), the training leader spotted this issue, understood it, and eventually resolved it. This training leader quite courageously exposed the problem, calculating and reporting the true costs of the 40% failure rate, the lost service capacity, the costs of customer dissatisfaction caused when technicians who really did need the training could not attend because of the waiting list, and so forth. Armed with this volatile information, the training leader was able to draw the attention of the highest-level senior leaders and engage them in a “summit” meeting to resolve the problems despite their deeply rooted causes.

At several points in our work with this client, there were temptations to simply sweep the problem under the rug. After all, the training manager was not keen on getting in the middle of the forecasting fray between sales and service. And besides, the service technicians who did get the training all enjoyed it and developed great new capabilities. The ROI data on average across the training program was superbly high and made the training department look good—a major boon to help score points in their continuing budget battle. But our bold client refused these temptations, looking at the problem not only from the point of view of the training function but also from a whole-company business perspective, as a true business leader should. From this vantage point, there was no choice but to take bold steps to report the problem truthfully and drive remedial actions, despite the inevitable political fallout that would ensue from rooting out the true causes for failure. This is another case where traditional evaluation methods would not have brought the critical issue to light.


GETTING INFORMATION TO THE PEOPLE WHO CAN—AND SHOULD—DO SOMETHING WITH IT

Most training evaluation models and methods that we encounter are designed and implemented so that the feedback gathered by the training function stays within the training function. A typical Kirkpatrick-based Level 3 practice, for example, is to ask trainees to take a survey some weeks or months after they have been to training. They are asked to report what they are doing or not doing to apply their training. Predictably, most of them are not doing very much with their training. The recipients of this information are the trainers, who (a) already suspect such lack of use is the case, and (b) are the people least able to do anything about the problem. Frequently, however, they are reluctant to share this feedback with line managers or senior managers for fear that it will cast a shadow on the value of the training program and thereby the training department.

In contrast, Courageous Training leaders know that the rightful recipients of Level 3 and other training impact data are the sponsors and customers of training: immediate managers, their bosses, and senior leaders. If good money has been spent giving employees new skills and knowledge that are needed for business success, and these employees are not applying (or are being hindered from applying) their new capabilities on the job, then someone in leadership needs to know. The trick to getting more impact from training is getting the right information to the people who have their hands on the levers that control the factors that keep learning from being used.

Table 6.1 shows clearly that many of the causes for training failure lie in the performance management systems, both formal and informal, that shape employee behavior. From our experience, in almost all cases, the failure of training to achieve maximum possible impact results from the interaction of several possible causes, few if any of which will be within the training department’s direct control.

In many cases, actions can be taken by the trainees’ immediate supervisors to change these negative factors. Employees who have recently completed training, for example, will typically face competing job responsibilities that may inhibit opportunities to try out new learning. Straightforward steps their managers can take are to structure the trainees’ post-training work responsibilities so that they have such opportunities, to check in with them soon after training, and to let them know that they, as their managers, expect their employees to seek opportunities to apply their newly learned skills.


TABLE 6.1 Common Causes for Training Failure


Sometimes these performance system inhibitors lie beyond the direct influence of immediate managers. For example, incentive and reward systems may not be aligned with the new ways of working that were taught in the training program. All the training in the world cannot lead to changed behavior if, for instance, the new way of working is going to help the employee “earn” a cut in pay. A thorough and objective evaluation process will ferret out such issues. In these cases, the parties responsible for the system inhibitors need to take corrective action.

In still other cases, the training failure culprits are senior leaders who do not set clear expectations and hold their direct reports accountable for supporting the application of training. In yet other instances, the training impact issues are caused by less obvious and culturally rooted practices, such as the way that training is valued and perceived in the organization. If training is viewed largely as an employee benefit, for example, then there will be little expectation for training to be actually applied in on-the-job performance.

When Courageous Training leaders give the right line managers and senior executives good evaluation data in a manner that they can understand and use, good things start to happen. One of our user group members reported a wonderful example of senior leaders taking action on training impact data. The company in this case had launched a new sales strategy, part of which included training new sales reps during a residential two-week program on how to sell more profitable and comprehensive systems (versus single products) to customers. Because the training absolutely had to work, the company built in a number of the Four Pillars methods and tools, including an Impact Booster for managers, and one-on-one meetings between district managers and their reps to help prepare the reps to constructively participate in and apply the training. The training leaders followed up the launch of the training with a Success Case Method impact study. The good news they discovered was that the training, when it was applied, led quickly to an increase of 40% in systems sales—the goal of the training. But the training was not being applied by reps in nearly one-third of the districts. The evaluation also looked into which sales managers were doing what to support the training. This inquiry discovered that in almost every district where there was no large increase in sales, the district manager had also failed to conduct the before-training preparation meetings. In the districts where training application was high and the 40% increase was achieved, district managers had faithfully conducted such meetings. Armed with the sales impact data, the bottom-line value of the sales results, what was being spent on the training, and the huge upside results that could be achieved with more consistent training application, the Courageous Training leaders in this company held a meeting to share the data with the senior vice president in charge of all sales. Here, briefly, is a dramatic recreation of part of that meeting, beginning just after the training leaders had walked the senior vice president through the results:

SENIOR VICE PRESIDENT: “Let me see if I have this straight. When my district managers conducted these training briefing meetings you taught them how to do, we got a 40% increase in sales almost 90% of the time. Do I have that right?”

TRAINING LEADERS: “Yes, that’s right.”

SENIOR VICE PRESIDENT: “And when they did not hold these preparation meetings, their districts almost never got the increase?”

TRAINING LEADERS: “Yes, that’s right too.”

SENIOR VICE PRESIDENT: “And, you’re telling me that even though you taught them how to do it, almost a third of the district managers are not conducting these meetings?”

TRAINING LEADERS: “Yes, that’s right again.”

SENIOR VICE PRESIDENT: “Thanks folks, I’ll take it from here!”

Soon after this meeting, this same senior vice president formulated and mandated a new policy. All new sales reps arriving for the two-week institute would be asked if they and their district manager had engaged in the preparation meeting. If the answer was “no,” the training leaders were to politely refuse to admit the reps into the program and were to send them back to the airport, so they could go back to their home districts. It took only one instance of this dramatic action for the district managers to get the message. The company was dead serious about making the new sales strategy work—and equally dead serious that district managers do everything they could to make it, and the training that was vital to its success, work. Subsequent impact evaluation showed a large increase in the number and effectiveness of these pretraining Impact Map discussions and an equally large increase in systems sales across all of the districts. Importantly, the directive to adhere to the training process came not from the training department, where it would have fallen mostly on deaf ears, but from the key stakeholder in the initiative, a senior business leader.


MAKING A BUSINESS CASE FOR IMPROVING TRAINING—THE COURAGEOUS TRAINING EVALUATION STRATEGY

The bold training leaders with whom we have worked follow a relatively simple but profoundly impactful three-step evaluation process called the Success Case Evaluation Method. This process has been described in detail in several other books (Brinkerhoff 2003; Brinkerhoff 2006), so we will not repeat the detailed steps here. But we will provide a general overview of the approach and the type of useful information that it helps Courageous Training leaders uncover. In each step, the training leaders ask a sharply focused set of questions, each set of questions driven by the answers to the questions that preceded them. Then, assuming they have gotten accurate and valid answers to each set of questions, they report their results to the people who can take action, supporting their request for action with a solid business case. Table 6.2 shows the flow of this evaluation process.

This information is collected through a combination of brief surveys and in-depth interviews with participants. Sometimes information is collected from managers or other people who can corroborate the behavior and results. Courageous Training leaders begin the data collection process with a clear idea of the behaviors and business outcomes they are looking for and that matter most to the key stakeholders, and they carefully validate the results to be sure they are verifiable and would “stand up in court.”


TABLE 6.2 Courageous Training Evaluation Process


Sometimes answers to the questions in Table 6.2 imply action needed by the training function itself, such as when the design of the learning experience prevented more people from mastering the learning outcomes, or when training delivery schedules or methods impeded participation. In other cases, and most frequently as we have already noted, the answers to the questions will imply actions needed on the part of nontraining personnel, such as managers of trainees, trainees themselves, Human Resources (HR) systems owners, or senior leaders.


SUMMARY

Courageous Training leaders acknowledge and confront the truth that no training works all of the time. They leverage this reality by dutifully seeking out and accurately reporting the business impact of the training, making sure that all the partners in the training process are fully informed. There is almost always good news to report, and this is communicated accurately, not distorted or misrepresented by, for example, misleading averages and other statistics that can mask the reality of impact. They make sure that the real causes of training success and failure are dug out and that the responsible parties are fully apprised of who is doing what they need to do in order to make things work better. In doing so, the training leaders continue to respect the commitments they have made to be true partners. By being sure that the people who could be most embarrassed or endangered by evaluation findings are the first to learn about them, the training leaders are careful not to undermine trust and constructiveness.

In summary, the principal actions that operationalize Courageous Training Pillar #4 are as follows:


  1. Evaluate the whole learning-to-performance process, focusing on application of the learning.
  2. Measure business impact.
  3. Investigate the effect of performance system factors.
  4. Look specifically at management involvement at all levels.
  5. Report the whole truth to the stakeholders; make recommendations for actions to be taken by the persons who have their hands on the levers for change.
  6. Follow through with additional evaluation to document and report progress being made, being sure to acknowledge the good work of partners.