Chapter 8

Evaluating Workshop Results

What’s in This Chapter

•  Exploring the reasons to evaluate your program

•  Introducing the levels of measurement and what they measure

Evaluation is the final letter in the ADDIE cycle of instructional design (analysis, design, development, implementation, and evaluation). Although evaluation is placed at the end of the model, an argument could be made for considering it far earlier: during design and development, and perhaps even during analysis. Why? Because the goals of the training, or the learning objectives (see chapter 5), provide insight into what the purpose of the evaluation should be. In fact, business goals, learning goals, and the evaluation of those goals are useful subjects to address with organizational leaders or the training sponsor. Trainers often begin a program without thinking about how it fits into a strategic plan or how it supports specific business goals, but these questions are critical to consider before implementing the program.

However, this chapter is not about that upfront evaluation of the program design and materials; it is about evaluating the program after it has been delivered and reporting the results back to the training sponsor. This form of evaluation allows you to determine whether the program objectives were achieved and whether the learning was applied on the job and had an impact on the business. Evaluation can also serve as the basis for future program and budget discussions with training sponsors.

A Note From the Author

Measuring the impact of any soft skills training can be challenging, even more so with communication skills training because so many factors can affect communication. Start early in the design process to include organizational goals as well as individual learning goals. Clarifying expectations of which specific communication skills are needed in an organization will not only improve your program design but will also help you decide what to measure and at what level to measure it.

Levels of Measurement

No discussion of measurement would be complete without an introduction to the concepts that underpin the field of evaluation. The following is a brief primer on a very large and detailed subject that can be somewhat overwhelming. If your organization is committed to measuring beyond Level 2, take some time to read the classics of evaluation.

In the late 1950s, Donald Kirkpatrick, one of the leading experts in measuring training results, identified four levels of measurement and evaluation. These levels build successively from the simplest (Level 1) to the most complex (Level 4), and each level draws on information gathered at the levels before it. For that reason, it is important to determine upfront at what level to evaluate a program. A general rule of thumb is that the more important or fundamental the training is, and the greater the investment in it, the higher the level of evaluation to use. The four basic levels of evaluation are

•  Level 1—Reaction: Measures how participants react to the workshop.

•  Level 2—Learning: Measures whether participants have learned and understood the content of the workshop.

•  Level 3—Behavior (also referred to as application): Measures on-the-job changes that have occurred because of the learning.

•  Level 4—Results: Measures the impact of training on the bottom line.

These four levels correspond with the evaluation methods described below.

Level 1. Measuring Participant Reactions

One of the most common ways trainers measure participants’ reactions is by administering end-of-session evaluation forms, often called “smile sheets” (for a sample, see Assessment 3: Course Evaluation). The main benefit of smile sheets is that they are easy to create and administer. If you choose this method, consider the suggestions below, but first decide on the purpose of the evaluation. Do you want to know whether the participants enjoyed the presentation? How they felt about the facilities? Or how they reacted to the content?

Here are a few suggestions for creating evaluation forms:

•  Keep the form to one page.

•  Make your questions brief.

•  Leave adequate space for comments.

•  Group types of questions into categories (for example, cluster questions about content, questions about the instructor, and questions about materials).

•  Provide variety in types of questions (include multiple-choice, true-false, short-answer, and open-ended items).

•  Include relevant decision makers in your questionnaire design.

•  Plan how you will use and analyze the data, and create a design that will facilitate that analysis (see the tabulation sketch after this list).

•  Use positively worded items (such as “I listen to others” instead of “I don’t listen to others”).
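A spreadsheet is often enough for the analysis planned in the final suggestions, but if you prefer code, the following is a minimal Python sketch. It assumes 1-to-5 Likert ratings and hypothetical question IDs grouped into the categories suggested above; all responses are made up.

from statistics import mean

# Hypothetical smile-sheet data: each response maps question IDs to a
# 1-5 Likert rating; question IDs are grouped into the form's categories.
CATEGORIES = {
    "content": ["q1", "q2", "q3"],
    "instructor": ["q4", "q5"],
    "materials": ["q6", "q7"],
}

responses = [
    {"q1": 5, "q2": 4, "q3": 4, "q4": 5, "q5": 5, "q6": 3, "q7": 4},
    {"q1": 4, "q2": 4, "q3": 3, "q4": 5, "q5": 4, "q6": 4, "q7": 4},
]

# Average rating per category, so the summary maps back to the form's groupings.
for category, questions in CATEGORIES.items():
    ratings = [r[q] for r in responses for q in questions]
    print(f"{category}: {mean(ratings):.2f} (n={len(ratings)})")

Summarizing by category mirrors the form’s layout, which makes the report easy to trace back to specific groups of questions.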

You can find additional tips for creating smile sheets and evaluating their results in “Making Smile Sheets Count” by Nancy S. Kristiansen (2004).

Although smile sheets are used frequently, they have inherent limitations. For example, participants typically cannot judge the effectiveness of training techniques, and results can be overly influenced by the facilitator’s personality or by participants’ feelings about having to attend training. Be cautious about relying solely on Level 1 evaluations.

Level 2. Measuring the Extent to Which Participants Have Learned

If you want to determine the extent to which participants have understood the content of your workshop, testing is an option. Comparing pre-training and post-training test results indicates how much knowledge was gained. Alternatively, you can give a quiz on the conceptual material 30 to 60 days after the training to see whether participants have retained the concepts. Because most adult learners dislike the idea of tests, you might want to refer to these evaluations as “assessments.”
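As a minimal sketch of the pre/post comparison, assuming scores are recorded as percentages and paired by participant (all numbers are hypothetical):

# Hypothetical paired pre- and post-training scores (percent correct),
# one pair per participant.
pre = [55, 60, 48, 70, 65]
post = [78, 82, 66, 85, 90]

# Raw gain: average point improvement across participants.
gains = [p2 - p1 for p1, p2 in zip(pre, post)]
print(f"Average gain: {sum(gains) / len(gains):.1f} points")

# Normalized gain: improvement as a share of the improvement available,
# g = (post - pre) / (100 - pre); undefined for a perfect pre-score.
norm = [(p2 - p1) / (100 - p1) for p1, p2 in zip(pre, post)]
print(f"Average normalized gain: {sum(norm) / len(norm):.2f}")

The normalized gain guards against ceiling effects: a participant who starts at 90 percent cannot show a large raw gain but can still show a high share of the improvement available.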

Another model of testing is criterion-referenced testing (CRT), which tests the learner’s performance against a given standard, such as “greets the customer and offers assistance within one minute of entering the store” or “initiates the landing gear at the proper time and altitude.” Such testing can be important for determining whether a learner can carry out the task, gauging the efficacy of the training materials, and providing a foundation for further levels of evaluation. Coscarelli and Shrock (2008) describe a five-step method for developing CRTs (a scoring sketch follows the list):

1.   Determining what to test (analysis)

2.   Determining if the test measures what it purports to measure (validity)

3.   Writing test items

4.   Establishing a cut-off or mastery score

5.   Showing that the test provides consistent results (reliability)
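Because a CRT reports each learner against the standard rather than against other learners, the scoring logic is simple once a cut-off score is established (step 4). A minimal sketch, with a hypothetical 80 percent mastery score and made-up results:

# Hypothetical CRT results scored against a mastery cut-off (step 4).
# The 80 percent cut score is an illustrative assumption.
CUT_SCORE = 0.80

results = {"learner_a": 0.92, "learner_b": 0.75, "learner_c": 0.80}

for learner, score in results.items():
    status = "mastery" if score >= CUT_SCORE else "non-mastery"
    print(f"{learner}: {score:.0%} -> {status}")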

Level 3. Measuring the Results of Training Back on the Job

The next level of evaluation identifies whether the learning was actually used back on the job. It is important to recognize that on-the-job application is where learning begins to have real-world effects, and that application is not solely up to the learner: Many elements affect transfer, including follow-up, manager support, and so forth. For example, consider a salesperson who attends training and learns a new, more efficient way to identify sales leads but, upon returning to work, is given no time by the manager to practice applying those new skills. Over time, the training is forgotten, and any value it may have had never accrues.

Methods for collecting data about performance back on the job include reports from participants’ managers, staff, and peers; observations; quality monitors; and other quality and efficiency measures. In “The Four Levels of Evaluation,” Don Kirkpatrick (2007) provides some guidelines for carrying out Level 3 evaluations (a minimal calculation sketch follows the list):

•  Use a control group, if practical.

•  Allow time for behavior change to take place.

•  Evaluate before and after the program, if possible.

•  Interview learners, their immediate managers, and possibly their subordinates and anyone else who observes their work or behavior.

•  Repeat the evaluation at appropriate times.
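One standard way to combine the first three guidelines, though not one prescribed in Kirkpatrick’s article, is a difference-in-differences comparison: the before-to-after change in the trained group minus the change in the control group. A minimal sketch with hypothetical numbers:

from statistics import mean

# Hypothetical Level 3 data: a job behavior metric (say, qualified sales
# leads identified per week) measured before and after the program for a
# trained group and a comparable control group.
trained_before, trained_after = [12, 10, 14, 11], [16, 15, 18, 14]
control_before, control_after = [11, 13, 12, 10], [12, 13, 12, 11]

# Difference-in-differences: the trained group's change minus the control
# group's change estimates the shift attributable to the training.
effect = (mean(trained_after) - mean(trained_before)) - (
    mean(control_after) - mean(control_before)
)
print(f"Estimated change attributable to training: {effect:+.2f} leads/week")

Subtracting the control group’s change helps separate the effect of the training from changes that would have happened anyway (seasonality, new tools, and so on).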

Level 4. Measuring the Organizational Impact of Training

Level 4 identifies how learning affects business measures. Consider an example related to management training. A manager attends management training, learns several new and valuable techniques to engage employees and keep them on track, and receives support in applying the new skills and behaviors upon returning. As time passes, the learning starts to have measurable results: Retention increases, employees are demonstrably more engaged and produce better-quality goods, and sales rise because quality has improved. Retention, engagement, quality, and sales are all measurable business results improved as a result of the training.

Measuring such organizational impact requires working with leaders to create and implement a plan to collect the data you need. Possible methods include customer surveys, measurements of sales, studies of customer retention or turnover, employee satisfaction surveys, and other measurements of issues pertinent to the organization.

Robert Brinkerhoff, well-known author and researcher of evaluation methods, has suggested the following method to obtain information relevant to results:

•  Send out questionnaires to people who have gone through training, asking: To what extent have you used your training in a way that has made a significant business impact? (This question can elicit information that will point to business benefits and ways to use other data to measure accomplishments.)

•  When you get responses back, conduct interviews to get more information.

Return on Investment

Measuring return on investment (ROI), sometimes referred to as Level 5 evaluation, is useful and can help “sell” training to leaders. ROI expresses the monetary value of business benefits, such as those noted in the discussion of Level 4, as a percentage of the fully loaded costs of the training. Hard numbers such as these can be helpful in discussions with executives about conducting further training and can raise the profile of the training function.
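The arithmetic itself is straightforward. A minimal sketch of the standard ROI formula, with illustrative dollar figures:

# Standard ROI formula: ROI (%) = (net program benefits / fully loaded costs) x 100.
# All figures below are illustrative assumptions, not benchmarks.
benefits = 150_000  # monetized Level 4 benefits (retention, quality, sales)
costs = 60_000      # fully loaded costs: design, delivery, facilities, participant time

net_benefits = benefits - costs
roi_pct = net_benefits / costs * 100
print(f"Net benefits: ${net_benefits:,}")  # -> Net benefits: $90,000
print(f"ROI: {roi_pct:.0f}%")              # -> ROI: 150%

The “fully loaded” qualifier matters: understating costs (for example, omitting participants’ time away from work) inflates the reported return.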

ROI was popularized by Jack Phillips. More in-depth information can be found in the ASTD Handbook of Measuring and Evaluating Training (Phillips 2010).

Reporting Results

An important and often overlooked component of both Level 4 and ROI evaluations is reporting results. Evaluation studies of this kind have several different audiences, so take time to plan the content of the evaluation report and its method of delivery for each audience. Consider the following in preparing communications:

•  Purpose: The purposes for communicating program results depend on the specific program, the setting, and unique organizational needs.

•  Audience: For each target audience, understand the audience and find out what information is needed and why. Take into account audience bias, and then tailor the communication to each group.

•  Timing: Lay the groundwork for communication before program implementation. Avoid delivering a message, particularly a negative message, to an audience unprepared to hear the story and unaware of the methods that generated the results.

•  Reporting format: The type of formal evaluation report depends on how much detailed information is presented to target audiences. Brief summaries may be sufficient for some communication efforts. In other cases, particularly programs that require significant funding, more detail may be important.

The Bare Minimum

•  If formal measurement techniques are not possible, consider using the simple, interactive, informal measurement activities found in Learning Activity 24: Informal Evaluations.

•  Empower the participants to create an action plan to capture the new skills and ideas they plan to use. Ultimately, the success of any training event will rest on lasting positive change in participants’ behavior.

Key Points

•  The four basic levels of evaluation cover reaction, learning, application, and organizational impact.

•  A fifth level covers return on investment.

•  Reporting results is as important as measuring them. Be strategic in crafting your results document, taking into consideration purpose, audience, timing, and format.

What to Do Next

•  Identify the purpose and level of evaluation based on the learning objectives and learning goals.

•  Prepare a training evaluation form, or use the one provided in chapter 11.

•  If required, develop plans for follow-up evaluations to determine skills mastery, on-the-job application, and business impact.

Additional Resources

Biech, E., ed. (2014). ASTD Handbook: The Definitive Reference for Training & Development, 2nd edition. Alexandria, VA: ASTD Press.

Brinkerhoff, R.O. (2006). Telling Training’s Story: Evaluation Made Simple, Credible, and Effective. San Francisco: Berrett-Koehler.

Coscarelli, W., and S. Shrock. (2008). “Level 2: Learning—Five Essential Steps for Creating Your Tests and Two Cautionary Tales.” In E. Biech, ed., ASTD Handbook for Workplace Learning Professionals. Alexandria, VA: ASTD Press.

Kirkpatrick, D.L. (2007). “The Four Levels of Evaluation.” Infoline No. 0701. Alexandria, VA: ASTD Press.

Kirkpatrick, D., and J.D. Kirkpatrick. (2006). Evaluating Training Programs: The Four Levels, 3rd edition. San Francisco: Berrett-Koehler.

Kirkpatrick, D., and J.D. Kirkpatrick. (2007). Implementing the Four Levels: A Practical Guide for Effective Evaluation of Training Programs. San Francisco: Berrett-Koehler.

Kristiansen, N.S. (2004). “Making Smile Sheets Count.” Infoline No. 0402. Alexandria, VA: ASTD Press.

Phillips, P.P., ed. (2010). ASTD Handbook of Measuring and Evaluating Training. Alexandria, VA: ASTD Press.
