CHAPTER 11

Why Bother With Assessment and Evaluation?

Needs assessment? Analysis? Why bother? It seems like a lot of work. It takes too much time. I don’t know how to design a needs assessment. Evaluation? Smile sheets? Reaction forms? Happy sheets? Whatever they are called. Aren’t they under attack by the performance people? Who needs that trouble? I don’t have time. My supervisor doesn’t support evaluations. There are so many obstacles that get in the way of even the best intentions of evaluating training.

Whoa! Hold that thought. Let’s look at assessment, evaluation, and the relationship between the two. Then we’ll determine if we should bother or not.

What’s a Needs Assessment?

A needs assessment usually starts the training cycle; it is the planning step that determines needs, or gaps, between the current condition and the desired condition. (Remember the discussion of the training cycle in chapter 1?) In a training setting it generally refers to a gap in employee performance. It does not necessarily mean that training is the solution. A needs assessment is an effective tool to clarify problems and even identify solutions.

Controversy arises when it is called a training needs assessment. Some believe that adding the word training diminishes the value of the needs assessment because it assumes a desired solution (Triner, Greenberry, and Watkins 1996). I don’t know about you, but the artist in me refuses to get hung up on a word. If the assessment shows the gap can’t be closed with training, then address the real problem.

The classical approach is to determine the discrepancy between the desired and actual knowledge, skills, and performance using a variety of methods, including interviews, observations, questionnaires, and tests. These methods involve gathering, analyzing, verifying, and reporting data. The discrepancy may translate into a training need or lead to something else. For example, a needs assessment in human performance improvement is used to discover needs related to performance issues, such as processes, resources, or organizational structures. A front-end needs assessment serves several purposes:

• It places the training need or request in the context of the organization’s needs. Training adds value when it serves a business need.

• It validates or dispels the initial issues presented by a sponsor. Although sponsors and clients know the business, they don’t always know the cause of or the remedy for issues that involve human performance.

• It ensures that the ultimate training design supports employee performance and thereby helps the organization meet its needs.

• It results in recommendations regarding nontraining issues that may be interfering with the achievement of desired organizational and employee goals.

• It establishes the foundation for back-end evaluation.

Needs assessment may occur at three levels: organizational, task, and individual.

• An organizational assessment evaluates the level of organizational performance. It may determine the skill, knowledge, and attitude needs of an organization. It also identifies what is required to alleviate an organization’s problems and weaknesses, as well as enhance strengths and competencies. Organizational assessments consider factors such as changing demographics, political trends, technology, or the economy.

• A task assessment examines the skills, knowledge, and attitudes required for specific jobs and occupational groups. It identifies which occupational discrepancies or gaps exist and how they show up, as well as examining new ways to do work that could fix those discrepancies or gaps.

• An individual assessment analyzes how well individual employees are doing a job and determines their capacity to do new or different work. It provides information on which employees need training and what kind.

PLAN YOUR NEEDS ASSESSMENT

Ask questions that will provide the best information. Determining what questions to ask is one of the most important considerations when conducting a needs assessment. Start with the end in mind: What do you want to accomplish? In doing so, think about results at the higher levels, which are more costly and strategic, as well as what is of interest to the organization and management (Phillips and Phillips 2016). Consider these questions as you plan your needs assessment:

• Who is being trained? What are their job functions?

• Are they from the same department or from a variety of areas or locations in the organization?

• What are the deficiencies? Why have they occurred? What are they costing the organization or the department?

• What kind of data do you need?

• What are the backgrounds and educational profiles of the employees being studied?

• What do employees expect? Desire? What do managers expect? Desire?

• What are you trying to accomplish with the needs assessment?

• How will the results of the needs assessment benefit the organization?

• What are the expected outcomes? What effect will these outcomes have on which organizational levels?

• Which data gathering method will work best: questionnaires, surveys, tests, interviews?

• Who will administer the assessment—in-house or external consultants?

• Will the analysis interrupt work processes? What effect will this have on the workforce and productivity?

• What is the organizational climate?

• Will there be a confidentiality policy for handling information?

There isn’t much controversy about conducting needs assessments. You will of course want to ensure that your instrument is reliable and valid. And quite honestly, I don’t spend too much time worrying about that unless I need the analytics for another comparison or use. Use common sense.

ADDIE: A to E

ADDIE, A to E, or is it? We typically think of the acronym as representing analysis, design, development, implementation, and evaluation. That puts evaluation at the end—the last thing we think about. Absolutely not!

Although evaluation is the final step in ADDIE, in actuality it starts the design and delivery process. That’s because it provides the details that allow you to establish goals and continuously improve the training session. Knowing how participants have been affected by the training program gives you the data necessary to determine what worked, what didn’t, and what changes may be necessary to be more effective. Anyone will tell you that you need to establish goals so you can measure results. That means you’d better put evaluation up front, so you know the goal. I believe that “evaluate,” the E, belongs in every step.

In analysis: Clarify the goal; the business result requires evaluation.

In design: Determine which questions will be useful to evaluate each level.

In development: Validate and evaluate instructional plans.

In implementation: Evaluate at Level 1 and at times Level 2, along with ongoing evaluations.

In evaluation: Evaluate, of course.

What would a new model with this much emphasis on evaluation look like? AeDeDeIeE. Evaluation should be in your hip pocket ready to grab throughout the entire process.

The Evaluation Process

How clear is your organization about the value of evaluation? If it isn’t clear, that certainly isn’t due to a shortage of information or models. Books are published every year. Conferences always feature evaluation topics, and there are even niche conferences devoted to the subject. Articles are written (and we presume read), classes are taken, and electronic resources are searched. What’s the problem? Evaluation needs to become an integral part of the training cycle. The information must be seen as an important part of the process. No matter where your organization stands in the evaluation debate, as a trainer and facilitator you can do your part to disseminate information about its importance.

Your plan for evaluation should begin soon after the needs assessment is complete. Specifying evaluation criteria is straightforward; the primary training needs are used to identify both class objectives and training outcomes (Goldstein and Ford 2002).

The Beginning of Evaluation for Training

Beginning in November 1959, Don Kirkpatrick published a series of articles about evaluation, based on his PhD dissertation, in the ASTD Journal. He described evaluation in four words, currently referred to as the Kirkpatrick Four Level Evaluation Model: Reaction, Learning, Behavior, and Results. He emphasized that all four levels are important, especially if the purpose of training is to get better results by changing behavior. In their book Implementing the Four Levels, Don and his son Jim write about the “importance of evaluating the four levels, or as many as you can” to “build a compelling chain of evidence of the value of learning to the bottom line.” They emphasize the importance of presenting the value of training in a way that will “maximize the meaningfulness to the hearts and minds of your internal stakeholders” (Kirkpatrick and Kirkpatrick 2006). The vast majority of organizations use the Kirkpatrick model today (Salas et al. 2012).

In the 1970s, Jack Phillips wrote his original work on training evaluation. He called for the training community to move beyond Level 4 to include a financial accounting of program success: return on investment (ROI). ROI shows the monetary benefits of the impact measures compared to the cost of the program. The value is usually stated as a benefit-cost ratio, as an ROI percentage, or as the time period required to pay back the investment. One of the advantages of ROI is that it can more readily demonstrate how training supports and aligns with the organization’s business needs.
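
To see the difference between these expressions of value, consider a quick worked example with hypothetical figures: a program that costs $100,000 to design and deliver and produces $150,000 in monetary benefits.

Benefit-cost ratio (BCR) = program benefits ÷ program costs = $150,000 ÷ $100,000 = 1.5:1

ROI (%) = (program benefits − program costs) ÷ program costs × 100 = $50,000 ÷ $100,000 × 100 = 50 percent

The BCR compares total benefits with total costs, while ROI expresses only the net benefit as a percentage of the cost, which is why the same hypothetical program shows a 1.5:1 BCR but a 50 percent ROI.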

The Value of Evaluation

Evaluation takes time, but it is important, even essential. Professional trainers build in time to measure results to ensure everyone in the training program knows what needs improvement, what kinds of assistance participants may need after they return to their jobs, and what obstacles still exist that prevent transfer of training. So what are the four levels?

Level 1: Reaction—Learner attitudes toward the training opportunity, such as satisfaction with involvement or with what was learned.

Level 2: Learning—Knowledge and skills learned, such as being able to state best practices or having new skills for tasks on the job.

Level 3: Behavior—Changes in execution and implementation of skills learned and practiced on the job.

Level 4: Results—Quantifiable results that demonstrate the impact training has on the organization.

Expanding the Four Evaluation Levels

Don Kirkpatrick’s original work has evolved into the New World Kirkpatrick Model, which Jim and Wendy Kirkpatrick (2016) published in Kirkpatrick’s Four Levels of Training Evaluation. The revised model incorporates return on expectations, emphasizes identifying results (Level 4) up front, and underscores the need to focus on the collective efforts of the training department, supervisors, and senior leaders to accomplish a return on stakeholder expectations. The process provides indicators of value through a holistic measurement of both qualitative and quantitative measures from a program or initiative, with formal training typically being the cornerstone.

To achieve return on expectations, learning professionals must ask questions to clarify and refine the expectations of key business stakeholders and convert those expectations into observable, measurable business or mission outcomes. Focusing on return on expectations is like turning the Kirkpatrick model on its head, because you need to ask questions and start with Level 4 first. These questions might include:

• What will success look like to you?

• What will participants do with what they learn?

• How will this training effort affect our customers?

• What impact will participant development have on our bottom line?

• How will you know we are achieving the goal?

• What is your goal?

In addition, the New World Kirkpatrick Model emphasizes the need to partner with supervisors and managers to encourage them to prepare participants for training and to play a key role after training by reinforcing the learned knowledge and skills. You also identify indicators that the training is achieving the objectives. It is understood from the beginning that if the leading indicator targets are met, the initiative will be viewed as a success. These steps are all completed during your design.

For example, if your company was providing training for sales staff, you might ask these questions:

• What skills do you want employees to perform?

• What desired outcomes should the company see due to these new skills?

• What indications will you have that the new skills will produce the results you desire?

• What will your customers experience based on these new skills?

• What data will you use for these measures?

Sample indicators like the following could be used at each of the four levels.

Level 1: Reaction

At Level 1, you would determine whether:

• Participants are satisfied with the relevancy of the training.

• Participants believe the training program covers the required objectives.

• Participants agree the trainer encourages participation and questions.

• Participants want more opportunities to role play.

Level 2: Learning

At Level 2, you would determine whether:

• Participants are able to state the features and benefits of the company’s top-selling products.

• Participants are able to use research results to reinforce benefits.

• Participants can demonstrate how to quickly bridge from their opener to the main topics of the sales call.

• Participants know the difference between fact questions and priority questions.

Level 3: Behavior or On-the-Job Application

At Level 3, you would determine whether participants can:

• Describe how a product meets a specific need.

• Establish rapport with customers during their sales calls.

• Plan their use of visuals prior to sales calls.

• Use more open questions than closed questions in their sales calls.

Level 4: Business Results

At Level 4, you would determine if:

• Participants have increased sales each week by at least 2 percent over a four-week period.

• Customers have reported increased satisfaction with the service received from representatives.

• Participants are making more sales calls than in previous weeks.

• Customer retention has increased by 10 percent over the prior year.

A thorough evaluation of your training program might involve all four levels. Each level of evaluation has value in its own right, but it is the combination of evidence that truly assesses how effective the training has been.

 

“Logic will get you from A to B. Imagination will take you everywhere.”

—Albert Einstein

 

More About Evaluation

Is your training evaluation guided by what matters or by what is easiest to measure? As you examine the four levels of evaluation in more detail, think about what you should be measuring at each level.

Level 1: Reaction

The most common kind of training evaluation is Level 1. It is easy, fast, and inexpensive when compared to evaluation efforts at other levels. When your program has ended, you will naturally want to find out if the training met the participants’ expectations. Try to obtain a picture of participants’ reactions to the training program as a whole, as well as a sense of their response to the various parts. What questions might you ask?

• Rate the following on a 1-5 scale:

a. I feel prepared to use what I learned.

b. The program was relevant to what I need on the job.

c. The training program has motivated me to implement what I learned.

d. The program achieved the stated objectives.

e. The trainer encouraged participation and questions.

• What did you find most useful in the program?

• What do you still need to be successful on the job?

• What suggestions do you have to improve this program?

Evaluate Yourself With the 4 Cs

Evaluating yourself and your results helps you determine whether you are doing your job (Silberman and Biech 2015). When evaluating learning and development opportunities, I believe you can use the 4 Cs to evaluate how well you are doing:

Competence. Do you deliver content in a way that ensures learners’ competence?

Commitment. Do you inspire commitment to return to the job and implement what was learned?

Confidence. Do you instill confidence in them so they will be successful?

Customer Service. Do you satisfy the learner and your client?

Translating the 4 Cs

The first one is easy and logical. We are supposed to improve our learners’ knowledge, skills, and attitude. That’s competence. But if we do not also increase learners’ commitment to change and confidence that they can change, they will not implement the new skills, performance will not change, and everything will remain the same. You must provide enough influence that the participant will return to the job and put into practice the skills and knowledge that were delivered during the learning session. Do the following example questions evaluate the 4 Cs?

a. I feel prepared to use what I learned (competence and confidence).

b. The program was relevant to what I need on the job (customer satisfaction).

c. The training program has motivated me to implement what I learned (confidence and commitment).

d. The program achieved the stated objectives (customer satisfaction).

e. The trainer encouraged participation and questions (engenders confidence).

Think about the 4 Cs when you design your Level 1 evaluation process.

Level 2: Learning

Besides finding out how participants viewed the training program, you need to know what knowledge, skills, and attitudes (KSAs) they acquired. The most common way to measure learning is the use of tests. However, it’s not easy to construct a test that is both reliable and valid: A reliable test is one that gets similar results time after time. A valid test measures what it purports to measure and not something else. So, take time to get feedback about your test items and pilot them before using them with your actual participants. With the objectives of active training programs in mind, be sure to go beyond testing factual recall. See if participants can state the information in their own words, give examples of it, and apply it in a variety of situations. Without this information, it is impossible to determine whether the training achieved real learning as opposed to memorization.

To further substantiate that test results prove that learning occurred, it is desirable (although not always practical) to test participants’ KSAs before the training begins as well as after. This gives you a baseline that makes it easier to state the changes that occurred as a result of the training program.
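
If you do collect pre- and post-test scores, even a few lines of analysis make the baseline comparison concrete. Here is a minimal sketch in Python, assuming a 100-point test and a handful of invented scores; the names, numbers, and the 80-point mastery target are illustrative only.

# Minimal pre/post comparison using hypothetical scores on a 100-point test.
pre_scores = {"Ana": 55, "Ben": 62, "Cho": 48, "Dee": 70}
post_scores = {"Ana": 82, "Ben": 75, "Cho": 71, "Dee": 88}

gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
average_gain = sum(gains.values()) / len(gains)

print("Individual gains:", gains)
print(f"Average gain: {average_gain:.1f} points")
print("Met the 80-point mastery target:",
      [name for name, score in post_scores.items() if score >= 80])

Even this small amount of structure lets you report both the average gain and who reached mastery, rather than a single pass/fail count.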

In addition to testing, you can obtain evidence that learning has taken place by doing any of the following:

• Assessing performance on job-relevant tasks at the end of the training program, usually involving in-basket exercises, games, or case studies.

• Interviewing participants to see how they would respond to job-related problems.

• Reviewing assignments that require participants to integrate what they learned.

Don’t overlook the value of asking participants directly about what they learned and how it will be applied. Self-report does not constitute proof but it provides some indication that learning occurred. The simplest approach is to use a questionnaire or interview to ask participants questions such as:

• What tools, skills, or ideas do you now have that you did not have at the beginning of this program?

• What have you learned that you can put to immediate use?

• What have you already practiced outside the session?

• What intentions or plans do you have as a result of the program?

• What do you want to learn next?

Level 3: Behavior

The easiest way to assess whether your training led to on-the-job application is to survey or interview participants once they return to their jobs. Questions to consider include:

• How have you used the things you learned in the training program?

• How has the training program helped you perform better at work?

• What specific steps can you take to continue or improve the use of the skills and knowledge you learned from the training program?

Although self-reports of participants may have value, there is every reason to confirm that participants are actually doing what the training objectives were seeking. Post-training performance may be evaluated by observing their actual job performance or obtaining feedback from supervisors or other key people, such as customers. You can try a procedure that combines participant self-report and supervisory feedback. You might do this by asking participants to complete a follow-up form at the end of a program that contains statements about how they plan to implement the training. Then, in three to four weeks, send another follow-up form to participants and their supervisors for evaluation. Level 3 data can also be obtained by:

• performance appraisals of participants

• on-site observation of sample participants on the job

• checklists of key behaviors that are completed by unbiased observers

• interviews with supervisors.

Examples of Level 3 measures, listed by Barksdale and Lund (2004), include:

• process measures (for example, participants follow a new business process back on the job)

• productivity measures (for example, participants’ errors are reduced)

• cost measures (for example, participants identify methods to reduce costs)

• revenue measures (for example, participants increase their referrals to other products)

• safety measures (for example, participants follow safety procedures).

One of the challenges of Level 3 evaluation is to provide evidence that the participant’s behavioral changes were a result of the training program and not the results of other factors. This is difficult to “prove.” However, if this is a requirement for you, consider the following questions in a Level 3 study:

• How many people were in the study? (The more people involved in the study, the more confidence in the results.)

• Was there a control group? (If people who were not involved in the training program also changed, the training program may not have been necessary.)

• How large was the performance improvement? (Small changes in behavior may not warrant the time and expense of the training program.)

• Was the improvement sustained? (Behavior occurring immediately after the training may not continue several months later.)

Level 4: Results

Assessing the business results of training may be the most difficult to do. Level 4 evaluation may require time-consuming activities, such as focus groups, strategic interviews, and observation. However, if you use the New World Kirkpatrick Model, your stakeholders will tell you what “results” they expect. These could include things such as a reduction in turnover in a specific department, an increase in sales, or a decrease in absenteeism. In addition to asking for stakeholder expectations, data may already exist in your organization. Consider these sources of data:

• employee engagement surveys

• organizational and team morale scores

• number of customer complaints

• employee retention; lost time

• sales revenue

• cost per sale

• safety ratings

• customer service ratings

• work flow and efficiency data

• awards from outside sources

• operating costs

• compliance versus violations

• accuracy studies

• consistency

• product defects.

The results captured in these data sources influence the organization’s bottom line by increasing sales, profits, revenue, market share, and so forth, but they also affect organizational effectiveness indicators such as employee morale. The danger of the bottom-line definition of impact is the tendency to reduce everything to financial terms, which may trivialize many nonfinancial but important measures.

Of course, evidence from these kinds of Level 4 data doesn’t constitute proof that the training was responsible for organizational results. If you are using an ROI model you will also use control groups. Evaluation guru Jack Phillips recommends additional alternatives such as trend-line analysis; participant, supervisor, and expert estimation; or input from customers and subordinates (Phillips and Phillips 2016b). The more evidence you provide, the more clout your Level 4 or ROI claims will have.

Return on expectations emphasizes that formal training events by themselves do not deliver significant bottom-line outcomes. It does not emphasize using estimates, assumptions, or empirical financial formulas to try to isolate the effects of training. However, it does acknowledge that the results always come from a variety of factors, and if you do not measure and know which factors aided in the transfer of learning to behavior and subsequent results, you will not be able to take any credit for the success. The return on expectation method does not attempt to isolate outcomes related to training, but it does place an emphasis on validating that the transfer of learning to on-the-job behavior is related to training.

ROI

Typically, ROI objectives set the acceptable level of monetary benefits versus the costs of the program. They may be expressed as a return-on-investment percentage, a benefit-to-cost ratio, or a time to payback. Examples of ROI objectives include the following (Phillips 2016), and a quick sketch that checks a program against objectives like these appears right after the list:

• Achieve at least a 20 percent return on investment within the first year.

• Achieve a 2:1 benefit-cost ratio.

• Realize an investment payback within six months.
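
To make those objectives concrete, here is a minimal Python sketch that checks a program against all three at once. The cost and benefit figures are invented placeholders, not data from any real program; substitute your own numbers.

# Hypothetical program figures; replace with real cost and benefit data.
program_cost = 100_000        # fully loaded cost of the program
annual_benefits = 240_000     # monetary benefits attributed to the program in year one

bcr = annual_benefits / program_cost                                # benefit-cost ratio
roi_percent = (annual_benefits - program_cost) / program_cost * 100
payback_months = program_cost / (annual_benefits / 12)              # months to recover the cost

print(f"BCR: {bcr:.1f}:1 (objective: at least 2:1) -> {'met' if bcr >= 2 else 'not met'}")
print(f"ROI: {roi_percent:.0f}% (objective: at least 20%) -> {'met' if roi_percent >= 20 else 'not met'}")
print(f"Payback: {payback_months:.1f} months (objective: within 6 months) -> {'met' if payback_months <= 6 else 'not met'}")

With these invented figures the program clears all three hurdles (a 2.4:1 BCR, a 140 percent ROI, and a five-month payback); the point of the sketch is simply that the same cost and benefit inputs feed all three expressions of the objective.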

Finally, remember that for trainers, evaluation of a training program is not only about outcome but also about process. Evaluation efforts should address what is happening in a training program as much as whether that program is making any difference. Why is process evaluation so important? Quite simply, without good records of what happened during a training program, it is not always clear what needs to be changed if the evaluation outcome is disappointing. Try to keep a log of the events during the learning program, how participants responded, and your own reactions. Or invite others to observe the training session and make observations about the program as it is being experienced. By doing so, you will be an active participant in your program.

The Purpose of Evaluation

I listed several purposes of assessments earlier in this chapter; now let’s examine potential purposes of evaluation. To increase the impact of training evaluation practices, clarify the purpose for evaluation and tailor subsequent decisions about what and how to evaluate the training (Salas et al. 2012). Identifying the purpose of the evaluation increases the likelihood that data will be accepted, eliminates wasted time, and increases the acceptance of training in the organization. The purpose of evaluation is likely to include several of these:

• Determine business impact, the cost-benefit ratio, and the ROI for the program: What was the shift in the identified business metric? What part of the shift was attributable to the learning experience? Was the benefit to the organization worth the total cost of providing the learning experience? What is the bottom-line value of the course’s effect on the organization?

• Improve the design of the learning experience: Evaluation can help verify the needs assessment, learning objectives, instructional strategies, target audience, delivery method, and quality of delivery and course content.

• Determine whether the objectives of the learning experience were met and to what extent: The objectives are stated in measurable and specific terms. Evaluation determines whether each stated objective was met. Nevertheless, knowing only whether objectives were met isn’t enough; a practitioner must know the extent to which objectives were met. This helps focus future efforts for content reinforcement and improvement.

• Determine the content’s adequacy: How can the content be more job related? Was the content too advanced or not challenging enough? Does all of the content support the learning objectives?

• Assess the effectiveness and appropriateness of instructional strategies: Case studies, tests, exercises, and other instructional strategies must be relevant to the job and reinforce course content. Does the instructional strategy link to a course objective and the course content? Is it the right instructional strategy to drive the desired learning or practice? Does the strategy fit with the organization’s culture? Instructional strategies, when used as part of evaluation, measure the KSAs the learning experience offers.

• Provide feedback to the facilitator: Did the facilitator know the content? Did the facilitator stay on topic? Did the facilitator provide added depth and value based on personal experience? Was the facilitator credible? Will the evaluation information be used to improve the facilitator’s skills?

• Give participants feedback about what they are learning: Are they learning the course content? Which parts are they not learning? Was there a shift in knowledge and skills? To what extent can participants demonstrate the desired skills or behavior?

• Identify the KSAs being used on the job: What parts of the learning experience are being used on the job? To what extent are they being used?

• Assess the on-the-job environment to support learning: What environmental factors support or inhibit the use of the new knowledge, skills, attitudes, and behaviors on the job? These factors could be management support, tools and equipment, or recognition and reward.

Training Was a Success—or Was It?

Evaluation data provide a great deal of information, but they can’t tell you everything. For example, data can tell you whether participants learned the content, but they cannot tell you whether participants were encouraged to implement the content on the job.

Will Meets Level 1

Rumblings about the limitations of the smiley sheet have been making headlines for years. Those who oppose using Level 1 evaluations appear to have three specific concerns:

• We don’t measure the actual performance (Shrock and Coscarelli 2007).

• Level 1 evaluations are typically completed at the end of the class, before the Ebbinghaus forgetting curve has taken its toll and while learners are at their peak performance. Certainly this is not where they will be after going home and on to work the next day. (See the sidebar on page 233.)

• Scientific research conducted by two separate teams in 1997 and 2008 showed that there is no correlation between the responses on Level 1 evaluations and learning results (Alliger et al. 1997; Sitzmann et al. 2008). Nor were Level 1 evaluations good predictors of future performance.

In his book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form, Will Thalheimer (2016) addresses smiley sheets head-on and declares all-out war on them. He is accurate in stating his observations. And I do agree I’ve seen some pretty bad Level 1 and Level 2 evaluations. But that’s the designer’s fault—not the fault of the model. Thalheimer also tries to make a case for what he calls a “Performance-Focused Smile Sheet.”

However, it’s important to recognize that Level 1 evaluations were never intended to measure or even predict performance. If they were, they would be called “Performance,” but instead they are called “Reaction.” Creating an evaluation step to predict performance is a great idea, but let’s use Level 1 to evaluate exactly what it was intended to measure—learners’ reactions.

Could Level 1 evaluation be improved? Yes, but let’s not create something for which it was never intended to be used. Instead, perhaps another level should be added—Level 1.5?

THE FORGETTING CURVE

The problem is no surprise to you. Participants forget information they have been exposed to during your training experiences. This dilemma was hypothesized and studied by German psychologist Hermann Ebbinghaus in 1885. He discovered that information is forgotten exponentially from the time learners consume it, and his forgetting curve shows how rapidly this loss occurs (Ebbinghaus 1964; Loftus 1985).
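
Ebbinghaus’s findings are often summarized with a simple exponential decay model; this is a common modern approximation, not his original equation:

R(t) = e^(−t / S)

Here R(t) is the proportion of material retained after time t, and S represents the strength or stability of the memory, which grows with review and use. The exact numbers matter less than the shape: without reinforcement, retention drops fastest in the first hours and days after a session, which is exactly why an end-of-class measure overstates what learners will still have available the following week.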

Testing or Examining—What’s the Difference?

In their book Telling Ain’t Training, Harold Stolovitch and Erica Keeps (2011) discuss the difference between tests and exams in one of their chapters. Since this is closely related to Level 2 evaluation, here are a few highlights:

• The word test creates stress for both high and low performers. Try to avoid it and use practice check or game instead.

• Testing is a natural part of training because learners want to try out new skills.

• To make testing most successful, deliver feedback with the results.

• Frequent testing or verification is best. It increases attention, engages the learner, and consolidates learning.

• Start with the performance objective for any test item.

• Use written or oral tests for declarative knowledge; use performance tests for procedural knowledge.

Tests, or shall I say practice checks, are a great way to learn.

The Art of Using Evaluation Results

Evaluating what you do is critical, but it doesn’t always happen. As a professional, you have a responsibility to build evaluation into every training event. You need to measure results at every point and design evaluation at the beginning. Plan your evaluation measures during design and development, pairing quantitative measures (Kirkpatrick’s Four Levels or Phillips’s ROI) with qualitative measures (Kirkpatrick’s Return on Expectations or Brinkerhoff’s Success Case Method).

Earlier in this chapter I suggested that ADDIE ought to be AeDeDeIeE. You can model the need for continuous evaluation by implementing some of these throughout the process:

• Refer to your objectives when developing your evaluation.

• Learn the art and science of tying results to the bottom line.

• Share results with your clients and your participants.

• Use a flipchart for a quick evaluation at the end of a short session—draw a large T-grid with the words works well on one side and need to improve on the other side. Then ask for ideas from your participants.

• Allow for anonymity to get the most honest answers.

• Keep a copy of your evaluations. Use the ideas and suggestions they contain to improve your future performance and program designs.

• Pilot all tests prior to using them with participants.

• Meet with supervisors following the training to determine how much of the learning was transferred to the workplace.

• Work with key leaders to identify how training affects the bottom line.

• Consider conducting a focus group with key supervisors if a training session will be repeated. It can tell you what skills seem to be transferring to the workplace and which ones are not.

Investment in training is assumed to have positive returns. The evaluation of training is inherently a good thing, but how can you be certain that the investment of training dollars is a wise decision? Can you clearly define the return on investment? Actually, you can’t. Allocating funds for training in comparison to other investments is often done on quasi-faith. Yes, of course, employees must be developed. They must learn new skills and improve performance. But when tight budgets are at stake, the lack of an exact science makes it difficult to invest money in training and development. One study states that research and analysis regarding the costs of training are effectively absent (Fletcher and Chatham 2010).

Tying It Together

I started this chapter with a discussion of assessment. Assessment and training evaluation should be tied together. A critical purpose in evaluation is to determine whether learning objectives (based on the assessment) were achieved. Even more important is to determine whether accomplishing the objectives resulted in enhanced performance on the job. Since learning is usually multidimensional, including KSAs, determining whether objectives were achieved usually requires different types of measures (Kraiger 2002; Kraiger et al. 1993).

As I suggested earlier in this chapter, an effective design is one that considers the role of evaluation in every ADDIE step. Following a needs assessment, identify both training objectives and evaluation criteria (Goldstein and Ford 2002).

One final word: When designing your evaluation, use precise affective, cognitive, and behavioral measures that reflect the intended learning outcomes (Salas et al. 2012). Coming full circle, these should be based on the objectives as a result of your needs assessment.

What We Know for Sure

Science tells us that we can rely on several proven facts:

• Kirkpatrick’s Four Levels of Evaluation is the model that is used more often than any other.

• Needs assessments and evaluation processes should be linked.

• The four levels each evaluate a different set of expectations.

• Start with Level 4 to measure return on expectation and get buy-in from stakeholders.

• Tests are a good way to learn and remember content.

• Controversy over evaluation will most likely always exist.

• Identifying the purpose of the evaluation increases the likelihood that data will be accepted, eliminates wasted time, and increases support of training in organizations.

• The best results occur when you use needs assessment data to develop both objectives and evaluation criteria that are related.

• Success is more likely if your evaluations are written using precise affective, cognitive, and behavioral measures that reflect the training objectives.

The Art Part

Your success will depend upon how well you adapt to the situation and your learners’ needs. Tap into some of these ideas to help your learners grow, to develop yourself, and to add your personal creative touch.

SurveyMonkey. Consider asking participants to download an evaluation while in the session and email it to you. Keep in mind, however, that there are two drawbacks to doing so. Emailing responses decreases anonymity. And waiting until after the session will dramatically decrease the number of responses you receive—often by more than half—unless there is a reward for completing it. ATD holds participants’ certificates until after the evaluation has been completed. If you prepare ahead of time, you could email everyone in the session a link to a provider such as SurveyMonkey, which resolves the anonymity issue.

Obtain feedback along the way. You may not want to wait until the end of your session to learn about your participants’ needs and satisfaction. Design your training program to obtain feedback and data on an ongoing basis so you can make adjustments before it is too late. Even observing participants’ behavior gives you clues about their satisfaction. Do they smile? Do they seem alert? Involved? Do they ask questions?

Behavioral barometers. Behavioral cues are often good evaluation barometers; however, they give you incomplete feedback. You could fill in the gaps by guessing how the participants think and feel, but although these guesses might be accurate, they may be influenced by your fears (if you are anxious) or your ego (if you are too confident). Verifying your impressions is the only way to obtain accurate and detailed feedback.

Add the range. The next time you average scores on your Level 1 evaluations, be sure to add the range to show the high and the low spread of numbers.
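
For example, here is a minimal Python sketch, assuming ten invented ratings on a 1-5 scale for a single Level 1 item:

# Invented Level 1 ratings (1-5 scale) for one evaluation item.
ratings = [5, 4, 4, 2, 5, 3, 4, 5, 1, 4]

average = sum(ratings) / len(ratings)
low, high = min(ratings), max(ratings)

print(f"Average: {average:.1f}")
print(f"Range: {low}-{high} (spread of {high - low})")

Reporting “average 3.7, range 1-5” tells a different story than “3.7” alone; an average built mostly of 4s and 5s plus a couple of very low scores signals that a subgroup had a problem worth investigating.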

Art and Science Questions You Might Ask

These questions provide potential challenges for your personal growth and development:

• Are your training evaluation results linked closely to business results?

• How well are you meeting your customers’ expectations?

• How informed are your senior level managers about the results your department achieves?

• When doing a needs assessment, what are the advantages and disadvantages of conducting it in-house versus using an outside consultant?

• How well do your evaluations measure what your organization requires?

• What data do you have that measures your value to the organization?

• How many of the questions on your Level 1 evaluation will be used to make decisions?

• What can you measure to demonstrate that learning has occurred?

• What kinds of measurement will show that behaviors have changed on the job?

• What kinds of information will your stakeholders want to see that demonstrates the effectiveness of training?

Why Bother With Assessment and Evaluation?

When done right, the planning for a needs assessment and the planning for evaluation begin at about the same time. The evaluation systems currently used are constantly under fire. Despite developments in evaluation theory and practice, many models still rely on judgments. Both needs assessments and evaluation are viewed as less objective and not as value-free as we would like (McNamara, Joyce, and O’Hara 2010). Why bother? Well, until a better solution comes along, the system can still address our needs:

• Level 1, Reaction, serves as an immediate measure of customer satisfaction. There is plenty of research that says we need to motivate, build relationships, and meet the learners’ needs. That deserves reaction time.

• Level 2, Learning, ensures that knowledge and skills were gained. Remember that you need to go beyond delivering competencies. You need to deliver a healthy dose of commitment and confidence too.

• Level 3, Behavior, shows there was a performance change. We want to know whether learners are making changes on the job after the session ends.

• Level 4, Results, demonstrates a business impact. This is the information your stakeholders care about.

• ROI for those few times when you are measuring a key or long-term project and you need to have more exact numbers—a return on the investment.

It’s up to you to ask the right questions.

Perhaps we do need a new process. As I was expanding what I already knew about evaluation, I came across a new 10-step model. Just what this fast-paced, complex, uncertain world needs—more complexity and more to do! Perhaps you will create a new system. And if you do, I challenge you to keep it practical, useful, simple, and functional.

Resources

Alliger, G., S. Tannenbaum, W. Bennett, H. Traver, and A. Shotland. 1997. “A Meta-Analysis of the Relations Among Training Criteria.” Personnel Psychology 50:341-358.

Barksdale, S., and E. Lund. 2004. “How to Differentiate Between Evaluation Levels 3 and 4.” In Training and Performance Sourcebook, edited by M. Silberman. Princeton, NJ: Active Training.

Biech, E. 2008. ASTD Handbook for Workplace Performance. Alexandria, VA: ASTD Press.

———. 2014. ASTD Handbook: The Definitive Reference for Training and Development. 2nd ed. Alexandria, VA: ASTD Press.

Brinkerhoff, R. 2003. The Success Case Method. San Francisco: Berrett-Koehler.

———. 2006. Telling Training’s Story. San Francisco: Berrett-Koehler.

Ebbinghaus, H. 1964. Memory: A Contribution to Experimental Psychology. New York: Dover (Originally published, 1885).

Fletcher, J.D., and R.E. Chatham. 2010. “Measuring Return on Investment in Military Training and Human Performance.” In Human Performance Enhancements in High-Risk Environments, edited by J. Cohn and P. O’Connor, 106-128. Santa Barbara, CA: Praeger/ABC-CLIO.

Goldstein, I., and J. Ford. 2002. Training in Organizations: Needs Assessment, Development, and Evaluation. 4th ed. Belmont, CA: Wadsworth.

Kirkpatrick, D., and J. Kirkpatrick. 2006. Evaluating Training Programs: The Four Levels. 3rd ed. San Francisco: Berrett-Koehler.

Kirkpatrick, J., and W. Kirkpatrick. 2016. Kirkpatrick’s Four Levels of Training Evaluation. Alexandria, VA: ATD Press.

Kraiger, K. 2002. “Decision-Based Evaluation.” In Creating, Implementing, and Managing Effective Training and Development: State-of-the-Art Lessons for Practice, edited by K. Kraiger, 331-375. San Francisco: Jossey-Bass.

Kraiger, K., J. Ford, and E. Salas. 1993. “Integration of Cognitive, Skill-Based, and Affective Theories of Learning Outcomes to New Methods of Training Evaluation.” Journal of Applied Psychology 78:311-328.

Loftus, G.R. 1985. “Evaluating Forgetting Curves.” Journal of Experimental Psychology: Learning, Memory, and Cognition 11(2):397-406. doi:10.1037/0278-7393.11.2.397.

Mager, R.F., and P. Pipe. 1997. Analyzing Performance Problems. 3rd ed. Atlanta: Center for Effective Performance.

McNamara, G., P. Joyce, and J. O’Hara. 2010. “Evaluation of Adult Education and Training Programs.” In International Encyclopedia of Education, edited by P. Peterson, E. Baker, and B. McGraw.

Phillips, J., and P. Phillips. 2016. Real World Training Evaluation: Navigating Common Constraints for Exceptional Results. Alexandria, VA: ATD Press.

Phillips, P. 2010. ASTD Handbook of Measuring and Evaluating Training. Alexandria, VA: ASTD Press.

———. 2016b. Handbook of Training Evaluation and Measurement Methods. 4th ed. New York: Routledge.

Pollock, R., A. Jefferson, and C. Wick. 2014. The Field Guide to the 6Ds: How to Use the Six Disciplines to Transform Learning Into Business Results. San Francisco: John Wiley & Sons.

Salas, E., S. Tannenbaum, K. Kraiger, and K. Smith-Jentsch. 2012. “The Science of Training and Development in Organizations: What Matters in Practice.” Psychological Science in the Public Interest 13(2): 74-101.

Schacter, D.L., D. Gilbert, and D. Wegner. 2011. Psychology. 2nd ed. New York: Worth Publishers.

Shrock, S., and W. Coscarelli. 2007. Criterion-Referenced Test Development: Technical and Legal Guidelines for Corporate Training. 3rd ed. San Francisco: Wiley & Sons.

Silberman, M., and E. Biech. 2015. Active Training: A Handbook of Techniques, Designs, Case Examples, and Tips. Hoboken, NJ: John Wiley & Sons.

Sitzmann, T., K. Brown, W. Casper, K. Ely, and R. Zimmerman. 2008. “A Review and Meta-Analysis of the Nomological Network of Trainee Reactions.” Journal of Applied Psychology 93(2):280-295.

Stolovitch, H., and E. Keeps. 2011. Telling Ain’t Training. Alexandria, VA: ASTD Press.

Thalheimer, W. 2016. Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form. Minneapolis: Work-Learning Press.

Triner, D., A. Greenberry, and R. Watkins. 1996. “Training Needs Assessment: A Contradiction in Terms.” Educational Technology 36(6):51-55.

Watkins, R., M. Meiers, and Y. Visser. 2012. “A Guide to Assessing Needs: Tools for Collecting Information, Making Decisions, and Achieving Development Results.” Washington, DC: World Bank.
