Description

Are you a trainer who balks at the idea of testing knowledge during your programs? Do you think it feels too much like school to administer a quiz? Many new trainers struggle to figure out how to assess individual learning within a large group, or how to create an instrument to measure soft skills, like listening or customer service. Developing a knowledge quiz is not something you’ve had to learn to do—until now.

Solutions

Here are some ways to make sure your Level 2 assessments both test and encourage knowledge attainment and performance improvement. This challenge covers how to design and deliver tests in general; read it in conjunction with Challenge 27, which focuses specifically on constructing effective test questions and scoring rubrics.

Eliminate Negative Associations

Asking questions after sharing a concept or an activity—testing—is not merely a way to determine how much has been learned. The “testing effect” is the psychological research finding that testing dramatically alters the learning process itself and promotes long-term retention. In a randomized study by Henry Roediger and Jeffrey Karpicke of Washington University (2006), two groups of college students studied a prose passage for seven minutes. One group then studied the passage for an additional seven minutes, while the other group was tested on the passage for those seven minutes. The results, shown in Figure 26-1, reveal that the group that took a practice test initially remembered fewer facts than the group that studied more. After two days, however, and again after one week, the group that had been asked questions about the passage had far better recall than the group that had simply been given additional time to review and learn the material.

Figure 26-1. Research Results of Studying More Versus Taking a Test

So, testing is important not just to show that your training hit the mark and that people learned, but also to cement their knowledge and to give people the confidence that they can do it (or get support if they can’t). Knowledge-based courses should be assessed with some kind of test administered individually. Courses that focus on performance can be assessed with either an individual test or a performance assessment. A performance assessment might be a tool you’re already using, such as a role play, that includes a thorough rating scale so that its results can be quantified for each participant. Wherever possible, participants should receive feedback on their test results. That is, reveal the answers or performance you were looking for and let them see how they measured up to those expectations. Don’t just share the correct responses; explain why they are correct.
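If you capture rubric ratings digitally, quantifying a performance assessment takes only a few lines of scripting. Below is a minimal sketch in Python; the four rubric dimensions, their weights, the five-point scale, and the 80 percent cutoff are all hypothetical placeholders to swap for your own criteria.

```python
# Minimal sketch: quantifying a role-play performance assessment with a
# weighted rating scale. All dimensions, weights, and the passing cutoff
# below are hypothetical; substitute the criteria from your own rubric.

RUBRIC = {                 # dimension -> weight (weights sum to 1.0)
    "opens the conversation professionally": 0.2,
    "listens and paraphrases": 0.3,
    "resolves the stated problem": 0.3,
    "closes with clear next steps": 0.2,
}
MAX_RATING = 5             # each dimension rated 1 (poor) to 5 (expert)
PASSING_PCT = 80.0         # hypothetical mastery cutoff

def score(ratings: dict[str, int]) -> tuple[float, bool]:
    """Return (percentage score, passed?) for one participant."""
    weighted = sum(RUBRIC[dim] * ratings[dim] for dim in RUBRIC)
    pct = 100.0 * weighted / MAX_RATING
    return pct, pct >= PASSING_PCT

pct, passed = score({
    "opens the conversation professionally": 4,
    "listens and paraphrases": 5,
    "resolves the stated problem": 3,
    "closes with clear next steps": 4,
})
print(f"{pct:.0f}% -> {'pass' if passed else 'needs support'}")
```

Weighting the dimensions lets the behaviors that matter most on the job count most toward each participant’s result, so the role play produces a comparable, quantified score rather than a general impression.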

Letting participants know that you are testing them to help increase their knowledge, not to put them on the spot, can help them feel more comfortable and overcome some of their negative associations with tests. They may see themselves as bad students or bad test takers and feel hopeless before they even try it. Reassure participants that what you are doing is meant to help the learning transfer to the workplace, and wherever possible, refer to it as practice instead of a test or quiz. The obvious exception here is when participants do have to pass a certification or completion exam to either gain a particular reward (a bonus, a promotion, to stay in their current role) or avoid a particular consequence (having to retake the training, going on probation).

Make Sure Tests Are Aligned With Objectives

Training should be solidly anchored to the criteria for success on the job through learning or performance objectives. This is referred to as “criterion-referenced instruction”—instruction that is closely tied to desired performance. Criterion-referenced testing is the means for ensuring that the learner has met the objective at an appropriate standard of performance. Both written tests and performance assessments should be criterion referenced. That is, don’t test things that weren’t covered in the training or that the learner won’t encounter or won’t need on the job. Criterion-referenced tests are essentially mastery tests that show how well a learner knows the subject matter. The other type, a norm-referenced test, compares learners with their peers. Norm-referenced testing happens primarily in educational settings, not in the workplace.
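The distinction is easy to see when the same set of scores is interpreted both ways. The short Python sketch below uses made-up scores and a hypothetical 80-point mastery cutoff: criterion referencing asks whether each learner met the standard, while norm referencing asks only how each learner ranked against the group.

```python
# Minimal sketch contrasting the two interpretations of the same scores.
# The names, scores, and the 80-point mastery cutoff are hypothetical.

scores = {"Ana": 92, "Ben": 74, "Chi": 85, "Dee": 81, "Eli": 68}
CUTOFF = 80  # criterion-referenced: a fixed standard of performance

# Criterion-referenced: each learner is judged against the standard,
# independent of how anyone else performed.
for name, s in scores.items():
    print(f"{name}: {'mastered' if s >= CUTOFF else 'not yet'}")

# Norm-referenced: each learner is judged against the group (here, by
# rank), regardless of whether anyone actually met the standard.
ranked = sorted(scores, key=scores.get, reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(f"{name}: ranked {rank} of {len(ranked)}")
```

Notice that under criterion referencing everyone could pass (or fail), while under norm referencing someone is always last—which is why the mastery-based interpretation is the right fit for workplace training.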

Test for the Right Behaviors

How will you know that your test or performance assessment is related to mastery on the job? Poll people who are adept at the desired skill, along with people who regularly observe the skill being performed well or poorly, about what mastery looks like, and design your instruction and related practice around their responses. Ask what people who are adept at the task do and what those who are not adept fail to do. Ask peak performers what about their behavior makes them effective.

Test for Understanding, Not Just for Recall

What’s ultimately most important is whether learners understood the content, not whether they can parrot it back to you. Your tests need to offer an appropriate level of struggle, so they should target tasks that sit higher on Bloom’s Taxonomy (see Challenge 17) than simply recalling facts. For example, rather than asking learners to name the components of a specific feedback model, like BASIC (balanced, authentic, specific, impact-focused, concise), you can ask them to write a piece of BASIC feedback, or to analyze a piece of feedback to see whether it is BASIC (and, if not, to determine which component is missing). Tests should distinguish between those who really “got” the content and those who were merely good test takers.

Test Throughout

Don’t save your assessments until the end of your program. Use informal, formative methods throughout. That way, if there are issues, you have a chance to fix them. The merits of formative assessment, in combination with formative feedback, have been confirmed through research (Klecker 2007). It makes sense: When learners can receive guidance for improvement as they progress through your content, they will do better than if they are assessed only at the end of a course (a summative evaluation).

Formative assessments—including practice exercises, pop quizzes, role plays, or concept maps—monitor student learning and provide instant feedback. Summative assessments evaluate student learning, skill acquisition, and achievement at the end of a course and can create a “cram and forget” situation. Neither approach should be used exclusively; a balance of formative and summative assessment is the best way to support learning.

Provide Supports

If learners will be able to use supports on the job—like the internet, job aids, or peers—let them use the same supports when completing the practice or taking the test. There is no need, for example, for a participant to memorize a chart to answer questions on your exam if the chart will be posted in their workplace. In this case, your test just needs to make sure they know how to read and use the information found there.

Make Sure Your Test Is Accessible

As with all components of L&D activities, you’ll need to ensure that the test or performance assessment complies with the Americans with Disabilities Act, and you’ll need to make accommodations for those who need them, such as extra time to read test questions, someone to read test questions aloud, or questions translated into other languages.

Bottom Line

Well-designed tests during and after your training programs help you measure—and promote—participant learning.

Go Deeper

“Develop Assessments and Tests,” ch. 10 in How to Write Terrific Training Materials, by Jean Barbazette

For a test to be effective, match it to your learning objectives. This is one of dozens of tips in this very practical chapter on developing a reliable and valid test. Examples of poor and better test questions are provided, as well as sample evaluation checklists.

“The Value of Formative Assessment” by Joan Godbout

Educational research indicates that formative feedback enhances learning and is perceived as beneficial by students. This research study suggests 10 characteristics of effective formative assessment, with a number of suggestions for facilitating its implementation. Although the article is geared toward academic scholars, its applicability to workplace learning is clear.

“Evaluation and CRT Development,” ch. 3 in Test Development, by Melissa Fein

Two prevalent evaluation frameworks used in training evaluation, the Kirkpatrick model and logic modeling, are briefly reviewed. Fein suggests, however, that you need not align solely with any one model of evaluation and that there is plenty to learn from any of them.
