Description

Unless your training function has access to employee and budgetary data, and is able to link that data to specific participants in your programs, the data you have on hand will probably not be what you need. But that doesn’t mean you can throw up your hands and walk away from measurement activities. Accountability is increasing for all organizational functions, not just L&D. As a result, our output is subject to scrutiny. You must do more with measurement to show that your work is having a positive impact on organizational initiatives, even if the data you have isn’t all that you’d like it to be.

Solutions

Evaluation at Kirkpatrick Levels 3 (behavior) and 4 (results) can help you to show the impact of your work organizationally. Here are some strategies to help you assess at these levels.

Pick the Right Metrics

Before you sit down at your computer to design or develop a training program, you need to know what outcome that program is intended to achieve and which metrics would show that an improvement has been made. If you really want to be seen as a partner, use the organization’s existing business metrics to assess impact, such as sales, contributions received, the number of customers satisfied with responses from the tech support team, or the number of tech support callbacks. Table 25-1 lists some measures you might consider.

Table 25-1. Soft and Hard Data

Soft Data (open to interpretation, hard to quantify or track, often behaviorally oriented)

Work Habits
•  Absenteeism or tardiness
•  Safety or rule violations

Work Climate
•  Employee grievances
•  Turnover
•  Job satisfaction

Attitudes
•  Employee self-confidence
•  Employees’ perception of job responsibilities
•  Perceived changes in performance

New Skills
•  Decisions made
•  Problems solved
•  Conflicts avoided

Development and Advancement
•  Number of promotions or pay increases
•  Requests for transfer
•  Performance appraisal ratings

Initiative
•  Implementation of new ideas
•  Successful completion of projects
•  Number of employee suggestions

Hard Data (objective, easy to quantify, common measures of organizational performance)

Output
•  Units produced
•  Items assembled or sold
•  Forms processed
•  Tasks completed
•  Calls answered

Quality
•  Scrap or waste
•  Rework
•  Product defects or rejects

Time
•  Equipment downtime
•  Employee overtime
•  Time to complete projects

Cost
•  Overhead
•  Variable costs
•  Accident costs
•  Sales expenses

Customer Metrics
•  Number of complaints
•  Number of support callbacks
•  Contributions or donations made

Collect a Baseline

Whatever you are going to measure after training, you need to measure before training as well. Otherwise, it’s impossible to prove that the results you see post-training have anything to do with the fact that people completed it. For example, if you hope that the call center employees who were trained will handle 10 percent more calls following your sessions, you need to know how many calls they were handling prior to training. In examples like that one, your baseline can be tied to actual numbers. In some cases, your baseline will be more anecdotal. For instance, upon registration for a coaching program, ask participants to estimate what percentage of their time in one-on-one meetings with their staff members is spent on development and coaching. Then ask them again immediately after the program and once more three to six months later.
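If it helps to see the arithmetic, here is a minimal sketch (in Python) of comparing a post-training measure against its baseline. The call-center numbers and the 10 percent target are invented for illustration.

```python
# Hypothetical call-center example: did calls handled per agent rise
# at least 10 percent over the pre-training baseline?
baseline_calls_per_agent = 42.0        # measured before training (illustrative)
post_training_calls_per_agent = 47.5   # measured after training (illustrative)

improvement = post_training_calls_per_agent - baseline_calls_per_agent
percent_change = improvement / baseline_calls_per_agent * 100

target_percent = 10.0  # the improvement you hoped training would produce
print(f"Change vs. baseline: {percent_change:.1f}% (target: {target_percent:.0f}%)")
print("Target met" if percent_change >= target_percent else "Target not met")
```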

Another inexact technique for measuring the impact of training is a trend-line analysis. This usually involves charting some measure over time, marking where in the timeline training happened, and looking for significant dips or upticks surrounding the training event. Trend lines require a historical record of what’s being measured, such as the number of customer complaints or product errors, before and after the training.
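As a rough illustration (not the only way to run a trend-line analysis), the sketch below fits a simple straight-line trend to the pre-training months and compares what that trend would have predicted with what actually happened after training. The monthly complaint counts and the training month are invented.

```python
import numpy as np

# Invented monthly customer-complaint counts; training happened after month 6.
complaints = [50, 48, 49, 47, 46, 47,   # months 1-6 (before training)
              38, 36, 35, 33]           # months 7-10 (after training)
training_month = 6

months = np.arange(1, len(complaints) + 1)
pre_x, pre_y = months[:training_month], complaints[:training_month]

# Fit a straight line to the pre-training period and project it forward.
slope, intercept = np.polyfit(pre_x, pre_y, 1)
projected = slope * months[training_month:] + intercept

actual_post = np.array(complaints[training_month:])
print("Projected by pre-training trend:", np.round(projected, 1))
print("Actual after training:          ", actual_post)
print(f"Average gap vs. trend: {np.mean(projected - actual_post):.1f} complaints/month")
```

A gap between the projection and the actual numbers is suggestive rather than conclusive; other changes in the same period could explain it, which is exactly why this technique is inexact.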

Use What Data You Can Get

Don’t let a lack of data stop you from reporting on something. Maybe you want to know how many people you put through a sales program actually increased their sales compared with those who hadn’t gone through the program, or how many of the managers you trained on leading organizational change had more successful change initiatives in their areas. This data isn’t always easy to get or readily available. Even when your ideal data can’t be secured, there is usually something you can report on.

Here are some of the ways the ROI Institute, presenting at ATD’s Core 4 conference, suggests that you collect data about whether your participants are applying their new knowledge, skills, or attitudes on the job (the percentages indicate how frequently organizations are currently utilizing these methods):

•   Participant survey (74 percent). Ask those who participated in the program to self-report, using a questionnaire, on how they are applying what they learned back at work.

•   Job observation (60 percent). Spend time in the participants’ workplace following your program to see whether the information you imparted is being utilized.

•   Supervisor surveys (44 percent). Ask supervisors to respond to a questionnaire about how those who participated in the program are utilizing their new knowledge, skills, or attitudes.

•   Action planning (39 percent). Compare the plans that participants created during your session with what they have actually accomplished after some period of time back on the job.

•   Performance records monitoring (37 percent). Compare participants’ performance data from prior to training with their performance data following training.

•   Participant interviews (34 percent). Ask those who participated in the program to self-report on how they are applying what they learned on the job using interviews or focus groups.

If nothing else is possible, soliciting self-reports from the people who participated in the program, or their managers, provides at least some anecdotal data you can share. Your questions might first be about job performance following training, like the ones in Table 25-2.

Table 25-2. Anecdotal Data From Learners and Their Managers

Ask the Learners

•  About how much of what you learned in this course do you think you have applied on the job: 0%, 25%, 50%, 75%, 100%?
•  In the three months since your training, have you used X skill?
•  If you did use X skill, what were the results?
•  If you did not use X skill, why not?
•  What follow-up from this training would be useful to you right now?

Ask Their Managers

•  Did the employees change their behaviors following training? Has this change been positive? Has this change been sustained?
•  How has employees’ performance improved compared with that of employees who did not attend training?
•  What has gotten in the way of these employees utilizing the KSAs from training?

And then your questions might try to clarify how much of that current behavior is a direct result of training. The ROI Institute suggests questions like these, which can help you weigh participant and supervisor responses in your calculations of value (a worked example of that weighting appears after the list):

•   Which factors, including—but not limited to—training, might have created the on-the-job improvements you’ve noticed?

•   As a percentage, how much of the improvement is due to each factor?

•   As a percentage, how confident are you in your estimate?
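One common way to use the answers to those last two questions, drawn from the ROI Institute’s approach to isolating a program’s effects, is to multiply the reported improvement by the percentage the respondent attributes to training and then by their confidence in that estimate. A minimal sketch with invented numbers:

```python
# Invented example: a supervisor reports a monthly improvement worth $10,000,
# attributes 40% of it to training, and is 75% confident in that estimate.
reported_improvement = 10_000    # value of the observed monthly improvement
attributed_to_training = 0.40    # share the respondent credits to training
confidence = 0.75                # respondent's confidence in their own estimate

# Discounting by both percentages yields a deliberately conservative,
# training-only figure to use in your calculations of value.
adjusted_value = reported_improvement * attributed_to_training * confidence
print(f"Improvement credited to training: ${adjusted_value:,.0f}")  # $3,000
```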

Set Targets for Each Metric

It’s not enough to just report numbers. Both you and the senior leaders you will be reporting to need to know what the numbers mean and whether they are reasonable. Which numbers are in line with competitors’ results or with industry benchmarks? Which numbers would make your results count as a success? For example, your organization may consider a 25 percent drop in customer complaints about a certain process, following training on that process, to represent a true success. What number are you striving for? Is that number realistic?

At NYU, we also had targets for how many people would evaluate our online offerings (and those who did evaluate programs were entered into a quarterly raffle, just to help us get to that target), as well as what the minimum overall satisfaction score should be, based on data a vendor shared with us about what standard satisfaction scores were for online learning.

Having targets for the number of responses you expect helps you to get creative in coming up with ways to get to that rate—from prizes (carrots), to not marking completion of courses until learners submit an eval (stick)—and helps ensure that you aren’t just hearing from those whose experiences with the course were the most extreme. Having minimum targets for the percentage of passing or satisfaction scores creates some context for the data you are sharing, such as, “Our goal for this training was to have 80 percent of participants pass with a score of 80 or above. Eighty-nine percent passed with a score of 82 or above.”
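As a small illustration of checking results against targets like these, the sketch below uses invented enrollment, response, and assessment numbers; the 60 percent response-rate target is an example, while the 80 percent pass target echoes the goal quoted above.

```python
# Invented course data: 120 learners enrolled, 78 submitted an evaluation,
# 90 took the assessment, and 70 of those scored 80 or above.
enrolled, evaluations_submitted = 120, 78
assessed, passed_at_80 = 90, 70

actuals = {
    "evaluation response rate": evaluations_submitted / enrolled * 100,
    "pass rate (score of 80 or above)": passed_at_80 / assessed * 100,
}
targets = {
    "evaluation response rate": 60.0,
    "pass rate (score of 80 or above)": 80.0,
}

for name, target in targets.items():
    status = "met" if actuals[name] >= target else "not met"
    print(f"{name}: {actuals[name]:.0f}% (target {target:.0f}%) -> {status}")
```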

Help your organizational leaders, who will want to set all targets for improvement at 100 percent, to come up with more realistic expectations.

Get Your Math Right

If you are going to report regularly on your outcomes, which will help to communicate your value, that data needs to be reliable and consistent. This means you have to establish some principles on your team about how data will be collected and recorded. For example (a short sketch of how a few of these rules might be scripted follows the list):

•   When analyzing data, select the most conservative alternative for calculations. For example, if you have two competing counts of how many people completed a certain online class, use the lower number.

•   Pick one rounding rule (for example, always rounding 0.5 up) and apply it consistently.

•   If no improvement data is available for a population, assume that little or no improvement has occurred.

•   If there is enough evidence that a participant answered without reading the questions (for example, they checked every response in the far-right column going down the page, even though that column’s rating scale flipped from “strongly agree” to “strongly disagree” a third of the way through), discard that participant’s responses.

•   Drop outlier scores (those on the extremes).
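Here is the short sketch mentioned above: one rough way a team might encode a few of these rules so everyone applies them the same way. The round-half-up choice and the outlier cutoff are illustrations of decisions your team would make, not rules from this chapter.

```python
import statistics

def conservative_count(*counts):
    """When competing counts of the same thing disagree, use the lowest one."""
    return min(counts)

def consistent_round(value):
    """Apply one agreed-upon rounding rule everywhere (here: round 0.5 up)."""
    return int(value + 0.5)

def drop_outliers(scores, z_cutoff=2.0):
    """Drop scores more than z_cutoff standard deviations from the mean;
    the cutoff itself is a team decision, shown here as an example."""
    mean = statistics.mean(scores)
    stdev = statistics.pstdev(scores) or 1.0
    return [s for s in scores if abs(s - mean) / stdev <= z_cutoff]

print(conservative_count(212, 198))             # 198: the lower completion count
print(consistent_round(86.5))                   # 87
print(drop_outliers([82, 85, 79, 88, 84, 20]))  # the extreme score of 20 is dropped
```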

Beyond agreement on these overall principles, you’ll need to know how any metric that you present was calculated. You may damage, rather than enhance, your reputation if you cannot answer a senior leader who asks, “How did you get that number?” Know how your numbers were calculated and use the same calculation and principles month-over-month and year-over-year, so that your results are reproducible and solid.

A basic example of how discrepancies in data can affect your reporting comes from two trainers at one organization I worked with, as shown in Table 25-3.

Table 25-3. Discrepancies in Data Reporting

Survey question: Would you recommend this course?

Yes: 20 people
No: 5 people
Skipped the question: 10 people

Trainer 1: Took the number of people who responded positively (20), divided it by the number of people who answered this question (25), and reported that 80% would recommend the course.

Trainer 2: Took the number of people who responded positively (20), divided it by the number of people who responded to the survey (35), on the logic that anyone who skipped the question wasn’t recommending it, and reported that 57% would recommend the course.

Which number the two trainers reached matters less than that they agree on whose calculation to use and then use it consistently across programs, courses, and time periods. Document how you compute your metrics somewhere that can live independently of you, in the event that you are not around when someone is making these calculations.
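One low-tech way to make the chosen calculation outlive any one person is to write it down as a documented formula or a small script. A minimal sketch, assuming the team settles on Trainer 1’s convention of counting only the people who answered the question:

```python
def recommend_rate(yes, no, skipped, count_skipped_as_no=False):
    """Team convention: by default, base the rate only on people who answered
    the question (Trainer 1's approach). Pass count_skipped_as_no=True to use
    Trainer 2's stricter denominator instead."""
    denominator = yes + no + (skipped if count_skipped_as_no else 0)
    return yes / denominator * 100

print(f"{recommend_rate(20, 5, 10):.0f}% would recommend the course")  # 80%
print(f"{recommend_rate(20, 5, 10, count_skipped_as_no=True):.0f}%")   # 57%
```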

Conduct Higher Levels of Evaluation Selectively

If getting the metrics you need is a real challenge in your organization, reserve doing so for your larger training initiatives. Conducting Level 4 evaluation makes the most sense when the program is extremely costly and you want to justify the expense, when a large number (or all) of your employees went through the program, or when the program was implemented to address a specific organizational challenge and you need to show how training has contributed to the resolution of that challenge.

Bottom Line

More and more organizations are data driven. If training is to be a current and value-added organizational function, we need to be held to the same standard.

Go Deeper

“Business Results Made Visible: Designing Proof Positive Level 4 Evaluations” by Ken Phillips

According to a 2009 ROI Institute research study, the number-one thing CEOs would most like to see from their learning and development investments is evidence of Level 4 business results, yet only 8 percent of CEOs receive this type of information. This article details how to select business metrics that have a strong relationship with learning program content and how to connect the learning program to the business metrics. This article and several other fantastic resources related to program evaluation are available to anyone who signs up, for free, as a member at this site.

“How to Collect Data” by Malcolm Conway

L&D professionals need data to determine more than whether courses provide the required learning: They need data to determine what learning is required and what level of learning employees already possess. This Infoline presents an overview of a five-step model for data collection developed by Greg Kearsley of George Washington University, from identifying your data needs and how you will fulfill them through collecting and validating the data. Strengths and weaknesses of some common types of research are included in a helpful grid.

“Make It Credible: Isolate the Effects of the Program,” ch. 10 in Value for Money, by Patti Pulliam Phillips et al.

This book is not focused exclusively on training events, and it can get extremely technical and analytical, but it is thorough and insightful. The review of the myths surrounding why we don’t try to isolate the effects of our programs in this chapter is incredibly helpful.

“Kirkpatrick Foundational Principles” on the Kirkpatrick Partners website

The steps to achieving return on expectations, or ROE, are based on the Kirkpatrick Model for evaluating training programs with the four levels used in reverse. This is only one of the many articles on the Kirkpatrick Partners website that can help you to implement the four levels of evaluation in your workplace. The tab for “The New World Kirkpatrick Model” can help ensure that you are applying the levels based on current thinking and data.

ROI Learning Center

The ROI Learning Center provides tools, templates, and case studies to help prepare you for success in today’s environment. Its goal is to take the mystery out of the ROI process. After signing up for a free membership, you’ll have access to webinars, articles, and templates to assist with the five levels of evaluation the center has identified, along with a calculator to help you compute ROI when you have the appropriate data.
