8

Implementation

When it comes to instructional design and the learners we all work to serve, implementation is the only element of ISD that most of this end-user population knows exists. Learners see the course, the online LMS, the assignments, quizzes, handouts, and examinations. As you undoubtedly already know, unless you are involved in course design at some level, you really don’t think about all of the hard work that took place behind the scenes. And frankly, in the eyes of learners, that is how it should be: the less they think about the mechanics of course design, the better job we did in designing that course.

Knowing when a course is ready for implementation is a design decision that isn’t as easy as it may seem at first glance. Borrowing an axiom from our friends in project management: You’re never more than 90 percent complete on any project you’re working on. It is the same with instructional design projects. It is time to move on to implementation when a designer has completed a process of due diligence and determined that there is little to be gained by waiting. Holding things up for that next 10 percent improvement might take longer than the total time allocated for the project.

Few full-time instructional designers implement the curricula they design. In some cases, a designer may have an evaluative role in early implementations just to make sure the course is going as planned and as designed. As ISD and instructional design become more mainstream in the world of training, it is usually not cost-effective to have designers also serve as facilitators.

Keeping designers out of the facilitator’s role for each new course may in fact be a good idea for several reasons. Designers who also facilitate tend to believe that they can improvise or fix missing or faulty design elements on the spot during implementation. That tendency makes lesson plans and other deliverables less detailed and therefore less useful to someone who may later facilitate the course. Designers who facilitate are also less likely to change a course design for later implementations. None of these are fatal flaws, but they are definitely something to think about if a designer is a one-person design and facilitation shop.

Evaluation in Implementation

The evaluation of the implementation process may include an evaluation of learners’ impressions of the training (Kirkpatrick’s Level 1) and must include evaluation of mastery of course objectives by learners (Kirkpatrick’s Level 2).

Kirkpatrick’s Levels of Evaluation

Donald Kirkpatrick (1998) broke evaluation into four levels, which have become the benchmark for evaluation and are easy to use. Although they are usually referred to as levels, these are operationally separate evaluative elements that can be used together or separately. Each has specific qualities and fits distinctive needs. Although the levels are numbered sequentially, designers do not have to use them in any particular order to achieve their evaluation objectives. The four levels of evaluation are:

• Level 1, reaction

• Level 2, learning

• Level 3, behavior

• Level 4, results.

Some authors extol the virtues of a Level 5 evaluation, which they call either ROI (return on investment) or ROE (return on expectation). Designers should make their own decisions about whether to consider these additional options.

This chapter includes descriptions of Levels 1 and 2, which are essential ingredients of evaluation during implementation. Descriptions of Levels 3 and 4 appear in chapter 9, relating to the evaluation phase of ADDIE.

Reaction, Level 1

Anyone who has ever completed an evaluation that asked for a reaction to a training course probably was responding to a Level 1 evaluation. The most common evaluations at this level are smile sheets, which ask about likes and dislikes. Smile sheets are so common that some people use the term to refer to all evaluations at this level. Other Level 1 evaluations are focus groups, which are held after training, and selective interviews, in which people ask a sample of learners their opinions of training as they leave a program.

The aim of each of these Level 1 evaluations is to discover learners’ reactions to the process. More than anything, Level 1 evaluation provides instant quality-control data. ATD has reported that between 72 and 89 percent of organizations use Level 1 evaluation (Bassi and Van Buren 1999).

A good strategy for Level 1 evaluation is to determine learners’ initial responses to the experience as they exit the training. The freshest and most accurate data for a Level 1 evaluation comes at the immediate conclusion of the training. Every minute that elapses from the end of the training to the reaction from a participant adds to the risk that inaccurate data will be collected. After all, designers are looking for a reaction.

These are typical questions:

• Was your time well spent in this training?

• Would you recommend this course to a co-worker?

• What did you like best?

• What did you like least?

• Were the objectives made clear to you?

• Do you feel you were able to meet the objectives?

• Did you like the way the course was presented?

• Was the room comfortable?

• Is there anything you would like to tell us about the experience?

It is important that designers realize the limitations of a Level 1 reaction evaluation. First, it has little if any relationship to whether a learner reached mastery of the course content, which is the single most important evaluation. Second, working toward a highly positive Level 1 result may not be a sound course design criterion: some courses are never going to be enjoyable for learners, and some content is not going to be fun to learn no matter how the course is designed. The real shame with Level 1 evaluations is that some organizations evaluate facilitators, and even hire and fire, based on Level 1 results. There are better ways to evaluate facilitators than expecting a reaction evaluation from a learner to be valid enough for this purpose. Use these evaluations as they are intended to be used: to determine a learner’s reaction and nothing else. They are not intended to serve as a peripheral facilitator quality-control vehicle.

Learning, Level 2

For instructional designers, evaluations at the learning level are tied directly to objectives. These are the evaluation tasks that designers develop to match their objectives. Surprisingly, only 29 to 32 percent of organizations use a Level 2 evaluation of learning (Bassi and Van Buren 1999). This statistic indicates that less than a third of all training is evaluated in relationship to its objectives, assuming there are any.

The performance agreement, which is covered in depth in chapter 9, goes a long way toward ensuring that objectives are correctly evaluated.

Let’s review the following objective:

Given a realistic role-play situation in beginner-level sales, the learner, playing the part of the salesperson, should be able to present three reasons why the client should purchase a specific product.

With this objective in mind, the designer then generates an evaluation task:

You have just entered the office of a major client. You have to make a case for buying your top-line product. It is important that you present at least three reasons why the client should purchase your product.

The next step requires the designer to match the key elements of behavior, condition, and degree in the objective and the evaluation task (Figure 8-1).

Figure 8-1. Key Elements in the Performance Agreement

Designers who follow the performance agreement principle of comparing the behavior and condition elements of both an objective and evaluation task will be accomplished Level 2 designers.
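The matching step behind the performance agreement principle can be pictured as a simple checklist. The following is a minimal, hypothetical sketch in Python (the element strings are illustrative paraphrases of the sales role-play example, not text from the book) that compares the behavior, condition, and degree elements of an objective and its evaluation task:

```python
# Hypothetical sketch of a performance agreement check (illustrative only).

def performance_agreement(objective, task):
    """Return, for each key element, whether the objective and the
    evaluation task agree on it."""
    return {
        element: objective.get(element) == task.get(element)
        for element in ("behavior", "condition", "degree")
    }

# The sales role-play example, broken into its key elements.
objective = {
    "behavior": "present reasons the client should purchase the product",
    "condition": "realistic role-play with a client",
    "degree": "at least three reasons",
}
evaluation_task = {
    "behavior": "present reasons the client should purchase the product",
    "condition": "realistic role-play with a client",
    "degree": "at least three reasons",
}

# All three elements match, so the task actually evaluates the objective.
print(performance_agreement(objective, evaluation_task))
```

Any element reported as mismatched signals that the evaluation task is measuring something other than what the objective promised, which is exactly the gap the performance agreement principle is meant to catch.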

Other Elements of Evaluation

During implementation, other elements of evaluation that must be present are:

• evaluation from the perspective of the facilitator

• evaluation of the materials or technology

• evaluation of the environment (room size, arrangement)

• continuity and conformity of implementation with the design plan.

These elements are independent of the level process and have the potential for providing data that suggest changes are necessary. Every aspect of the design is subject to further alteration once implemented. As noted earlier, designers should never consider a project more than 90 percent complete. This means they have a work in progress, not a project that has no hope of redemption. Careful evaluation will provide ample opportunities for tweaks during and after implementation.

Even perfectionists can relax, knowing that everything is a work in progress, including content and materials. They may get the check in the mail for their work or be assigned to another project, but the designs they have worked on are still maturing.

In Conclusion

In ISD, the implementation phase is the time when courses are actually delivered and learners experience the final design product. Implementation is the sum of all the ADDIE elements and also includes Levels 1 and 2 of Kirkpatrick’s levels of evaluation. This is the phase of ISD most familiar to those not involved in the design process, and learners often assume it is the only element of course design because it is all they ever see as end users. While this phase typically involves little direct instructional design work, it is not unusual for a designer to also facilitate a course or to observe facilitation for evaluative purposes.

Discussion Questions

1.  Which do you think is most important to an instructional designer: the evaluation of learners or the evaluation of the instructional design process itself?

2.  Evaluation is a vital element in the implementation phase of ISD. Why do you think that a majority of education and training courses are never reviewed in a way that allows for improvement of the course?

3.  Do you feel that Level 1 (reaction) evaluations are an important aspect of course implementation?
