CHAPTER 9

Evaluation

KEY CONCEPTS

• Evaluation is the ISD phase where all the design and implementation processes and products are evaluated

• Conducting evaluation for the analysis, design, development, implementation, and evaluation phases of ISD

• Conducting evaluation of objectives to ensure quality and for mastery determination

• Degree of difficulty as a necessary aspect of evaluating content rigor

• Performance agreement and the fit between objectives and evaluation tasks

CHAPTER OBJECTIVES

At the end of this chapter, the learner should be able to:

• Describe at least two uses for evaluation in the:

∘ Analysis stage of ISD

∘ Design stage of ISD

∘ Development stage of ISD

∘ Implementation stage of ISD

∘ Evaluation of a course or project

• Provide at least three reasons for evaluating the design of behavioral objectives.

• Explain why designers should care about the degree of difficulty in objectives.

• List the elements of the performance agreement principle.

• Provide at least two examples of the use of psychometrics in the evaluation process.

In the past, evaluation was sometimes an afterthought and sometimes completely ignored in courses that didn’t require examinations for academic or licensure reasons. There was a commonly held feeling that simply sitting through a course was enough. Designers always knew this wasn’t true, but now almost every course has to have a rigorous evaluation process in place, and it has to be more sophisticated than asking a learner if they liked participating.

This demand for precise and reliable evaluative documentation requires designers to take a much more detailed look at every facet of the design process and product. Designers must be concerned with three different but related areas that must be monitored to design and deliver quality courses and projects. Specifically, we work with learner-centered, facilitator-centered, and design process evaluations. This is the evaluation trilogy in instructional design (Figure 9-1).

Figure 9-1. Aspects of Evaluation

Learner-centered evaluation is the more traditional role of evaluation, focused on learner mastery and shaped by Donald Kirkpatrick’s work and his evaluation levels. Facilitator-centered evaluation looks at the facilitator’s role in moving learners to mastery. In design process evaluation, we are focused on the actual process and deliverables related to course development. All of these are evaluation, but they require different skills and different tools to be effective.

As an example of how evaluation works in this environment, let’s look at the development of a single course. First, you have the design and other ISD elements that go into developing the final course, which is then implemented. Once the course is implemented, the second part of evaluation comes into play during the course as you determine mastery of each learner. Both are evaluations, but the information you are evaluating is completely different.

This chapter will examine how evaluation has become more than learner related; it is now about evaluating the design process and even the evaluation process itself.

Kirkpatrick’s Four Levels of Evaluation

One of the most important contributions to the work of instructional designers as it relates to evaluation was made by Donald Kirkpatrick (Figure 9-2). Starting with his PhD dissertation in 1954, Kirkpatrick was part of the first wave of instructional designers, along with Robert Gagné and Robert Mager, to establish science-based theory and practice in ISD. This former president of the American Society for Training and Development (now the Association for Talent Development) gave the field the most logical and enduring model of evaluation.

While most refer to Kirkpatrick’s model as the four levels of evaluation, in reality they are four areas of evaluation. Just as with ADDIE and other models, the seemingly linear appearance of Kirkpatrick’s evaluation model, which stems from his use of the term levels, is not consistent with how a designer works with evaluation in practice. Kirkpatrick’s four levels of evaluation (1994) are:

Figure 9-2. Kirkpatrick’s Levels of Evaluation

• Level 1: Reaction

• Level 2: Learning

• Level 3: Behavior

• Level 4: Results

Level 1: Reaction

When we look at the first level of evaluation as instructional designers, we are trying to determine how a learner reacted to a specific course or program. Did they enjoy it? Do they feel they learned something? Did they like the facilitator, and would they recommend the course to a friend? These are the typical types of data being sought with this evaluation. In the form they are usually distributed, these evaluations are often called smile sheets, and it is easy to see why: Was there a collective learner smile at the end of the course?

Instructional designers have always held a rather mixed view of the usefulness of these types of evaluations. One view is that any information is valuable as a designer and that even this type of affective feedback has some usefulness in evaluation and future adjustments of course properties. The other view is that these evaluations have little if anything to do with mastery and therefore have little durable value in design. Both points of view have merit.

Instructional designers have always held a rather mixed view of the usefulness of Level 1 evaluations.

The smile sheet–supportive designer will argue that it is valuable to have data on environmental issues like learner perception of facilitators, learning environments, and the depth of learners’ feelings, gathered with questions such as “Would you recommend this to a friend?” Many facilitators feel that these evaluations unfairly, and certainly unscientifically, judge their effectiveness based on arbitrary post-implementation emotional reactions. Some organizations rely entirely on this type of evaluation when it comes to hiring, retaining, and promoting facilitators.

As with any aspect of instructional design, it is important to place the information and data in the context of what it offers in terms of evaluation. Universal adoration or dislike of an instructor tells us something, but what? It is possible that a very talented facilitator was given a horribly designed course to implement, and no learner is going to feel good about that experience. Always evaluate data across the entire spectrum of available information and don’t make a judgment based on a single, narrow data point.

Level 2: Learning

The heart and soul of instructional design rest in learner mastery, and the evaluation of mastery is critical in the practice of professional-level ISD. Learning evaluations can take many forms; we looked at this extensively in the chapter on objectives since it is impossible to measure learning accurately without objectives. The instructional designer is responsible for making sure that mastery is measured based on the expected outcomes for a course or project. This type of evaluation must be objective, without any subjectivity in either the process or the provider.

While most instructional designers consider it mandatory to implement and review evaluations of learning, the majority of organizations still don’t require this type of evaluation. Academia, applied sciences, apprenticeships, and other credentialed and certified programs are almost universally required to have documented evaluations for learners to retain their institutional licensing and authority.

Level 3: Behavior

The most important question Level 3 seeks to answer is, “Did the training stick?” How much of the training transferred from delivery to the workplace? In many organizations and course design projects, this evaluation is never actually implemented to determine behavioral change. The most recent study showed that between 11 and 12 percent of training is evaluated for behavioral change (Bassi and Van Buren 1999). This is not surprising given the focus on the immediacy of course implementation and results. However, there are many times when Level 3 evaluations should be part of a design approach, and the more a designer can make the case for performing this evaluation, the more likely it is to be actually used in practice.

There are several ways to conduct Level 3 evaluations that any designer can add to a toolkit. Surveys and observation are two powerful ways to evaluate at this level. The thing to remember about this level of evaluation is the behavior: Did the behavior move to the workplace? Objectives that are written well already contain half of what is needed for this evaluation. The other half is selecting a way to measure where participants started and where they are when long-term results are measured.

Surveys and observation are two powerful ways to evaluate at Level 3.

Other Evaluation Questions

Designers who are interested in seeing if participants can meet the training objectives will evaluate learning, whereas those who are interested in seeing if performance has improved will measure behavior. Both of these questions can be answered with evaluation.

For example, evaluations would differ for a course on the use of new software for entering orders in a retail sales environment. A designer is interested in finding out if the course had any impact. Accurate data is available on how long it took to complete a transaction before the training, with both the new and the old software. At regular intervals, the designer accumulates new data on how long it takes to complete a transaction and compares the numbers. The designer can easily see any difference in time and determine whether the training had an impact and how much.
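
To make this concrete, here is a minimal sketch, in Python, of the kind of before-and-after comparison just described. The transaction times, interval, and variable names are purely illustrative and not drawn from any real project.

```python
from statistics import mean

# Hypothetical transaction times in seconds, recorded before the training
# and again at a later interval (all values are illustrative).
before_training = [212, 198, 240, 225, 205, 230]
after_training = [170, 182, 165, 190, 175, 168]

baseline = mean(before_training)
current = mean(after_training)
improvement = baseline - current

print(f"Average before training: {baseline:.1f} seconds")
print(f"Average after training:  {current:.1f} seconds")
print(f"Improvement: {improvement:.1f} seconds ({improvement / baseline:.0%} faster)")
```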

Designers who want to find out how much of the training objectives learners can still meet can sample a representative number of participants with the formal evaluation task used during the course. They then compare the scores on an individual or group basis and do the math. This method will go a long way toward evaluating if the content, as well as the instructional and delivery methods, were the best choices for the project.

In situations in which the evaluation is not so simple (with soft skills or affective domain courses, for example), designers can interview or survey participants and gauge the participants’ opinion of their ability to meet the objectives. If possible, designers can also retest a sampling of the participants.

There are three basic reasons participants may lose the ability to meet objectives after the course, each of which tells the designer something important about the course. The reasons are covered in detail here.

Participants Never Learned the Skill or Concept

On those occasions when learners fail to achieve mastery of course content, the designer will see this in the Level 3 evaluation of the course objectives. A large percentage of learners in this category typically points to errors in implementation or design. The first place for the designer to revisit is the course design, to make sure that the population truly fits the content level and course design choices.

It is possible that the facilitators who carried out the implementation may have done a poor job or didn’t follow the lesson plan as provided. It might be that the participants ignored the prerequisites for the program. It is also possible that the evaluation tasks were either ignored or compromised to the point that participants were never evaluated at all. The lack of an evaluation sets up a scenario in which neither participant nor facilitator can really tell if the objectives are being met.

The cause can be as simple as poor performance agreement or a sloppy lesson plan structure that fails to support acquisition of the content. It is also prudent to review motivation and attitude if objectives are not being met across the cohort population.

The Skill or Concept Was Never Retained

Problems with retention may come from any number of issues. The most common problems are too much content in too short a time or a lack of any supportive materials or methods after the conclusion of the course. It is also possible that the content had no meaning or importance to the participants. Ownership of the content is important if participants are to retain information for any length of time. Ownership necessarily implies content and course design that allow that to happen.

The Skill or Concept Was Never Used After the Course

Designers who determine that participants had no opportunity to use the skills or concepts sometimes face issues beyond their control. They may train 300 learners to be motorcycle mechanics, but if nearly all of them end up in sales, the training will not stick. These kinds of issues are especially important in the psychomotor and cognitive objective domains. Yes, people may still be able to ride a bike after many years without practice, but how many years of practice did they have to build and support those skills in the first place?

Level 4: Results

A Level 4 evaluation is about bottom-line results. What was accomplished? Did the training pay off? Were the expected or promised results accomplished? This level of evaluation has also drawn more than a few skeptics because inflated claims of return on investment (ROI) have sometimes entered into the process and driven many to question any claimed results.

Probably less than 3 percent of training is evaluated for results (Bassi and Van Buren 1999). Make no mistake about it: Figuring out results can be a tricky and sometimes expensive undertaking. One reason for this is that the value of results can be both monetary and societal. Although the impact on an organization can be calculated with some degree of certainty, the impact on a community is tough to measure and is largely subjective.

However, no one should discount the power that training can have for change in a community. A poison prevention course, for example, is community based, and the impact could be literally lifesaving, a true Level 4 result.

Learner Evaluation

If learners are able to reach Level 2 mastery of the content in significant numbers, you should probably feel confident that the course design itself is reasonably solid. A low percentage of content mastery would indicate an issue somewhere in the course design, and it would then be prudent to review the course design for obvious problems. The second immediate concern relating to learners and evaluation is a review of the Level 1 results to see if there are any issues with the way the course was perceived by the learners. Both levels of results are important to a designer, as well as sponsors, facilitators, and others, since these are the most easily recognized evaluative tools in ISD.

After a course or program has been implemented, and the immediate evaluations of mastery and reaction have been administered and reviewed, a designer should consider the use of an evaluation of learning effectiveness after a period of time has elapsed. These are sometimes considered a Level 3 or behavior evaluation since they are used to see if any changes in learner behavior related to the course objectives were still present after course completion. Organizations that are looking for specific improvements like fewer accidents or increased productivity will find these evaluations extremely valuable.

In the practice of professional ISD it is important to go deeper into the relationship between the facilitator and learner mastery.

Facilitator Evaluation

One of the most important elements of learning is the role that the facilitator plays in moving learners to mastery. This is typically measured indirectly by looking at learner mastery or how a population in a course feels about the experience. In the practice of professional ISD it is important to go deeper into the relationship between facilitator and learner mastery; we can look at it through several different measures. It is possible for learners to be successful with even the worst facilitator, and without an objective way to look at this aspect of the learning equation, designers miss an evaluation opportunity that exists nowhere else. For our purposes, we will look at facilitator evaluation in these four areas:

• Credentials

• Teaching style

• Course structure

• Effectiveness

Credentials are the demonstrated ability of a facilitator to teach a specific subject area. This might include educational credentials like a degree or certification. This area also includes prior demonstrated experience in the content area. This is important in apprenticeship and other teaching areas that are not reliant on formal educational credentials.

Teaching style relates to the ability of a facilitator to connect with a population of learners in a course. This might include the ability of a facilitator to create an encouraging learning environment or an online community to support learning. Other elements would be ethical treatment of all students and professional behavior within the learning environment.

Course structure is one area that relates directly to a facilitator’s ability to deliver a course that is well designed and focused on ISD principles and practices. The facilitator must be able to work from behavioral objectives and provide observable and measurable evaluation of mastery.

Effectiveness is tied directly to the measurement of mastery within the learning population. If a course doesn’t deliver on the expected level of mastery across the population, this would be revealed in a facilitator effectiveness evaluation.

REFLECTION

Level 3 evaluations are less likely to be conducted than Level 1 or 2 evaluations. This is because they generally require more time and resources, which some consider unnecessary. They are also often conducted during a period that falls outside the original training budget window, so there may be no budget for them. This is why a designer who believes a Level 3 evaluation is necessary for a project must be prepared to make the case for it early in the design process.

As an instructional designer, what aspects of the design process do you think should be evaluated as an essential element in the practice of ISD?

Design Process Evaluation

Evaluation of the design process, separate from the learners’ mastery, informs the designer about a host of other issues. Evaluations and feedback should be solicited from everyone who is not an instructional designer but was involved in the course design itself, which at a minimum includes facilitators, subject matter experts, graphic artists, programmers, photographers, and even the clients themselves. While the course may have gone well relative to learner mastery and reaction, this level of evaluation may reveal an entire list of other issues that in some ways are more important to future work. This might include a perception of less than acceptable communications, lack of involvement in decision making, and even late payment on invoices. Facilitators will also offer incredibly important information concerning the implementation and learner interaction issues. They are also experts on the materials and flow of the course design who provide data you just can’t get anywhere else.

Evaluation during all project elements provides the quality control mechanism that ensures an honest and meaningful snapshot of both process and product. It is almost impossible for a project to be considered valid without a comprehensive evaluation strategy that goes beyond looking at the issues associated with learners and facilitators; the process itself must be examined. Analysis, design, development, and implementation all have evaluation needs that designers should include in their projects. Let’s look at some other evaluation concerns in ISD.

Evaluation During Analysis

During the analysis phase of course design, it is important to make sure a designer has covered all the necessary analytic areas required for a project. This may take the form of a checklist of areas in which data must be gathered and reviewed. In other projects, evaluation at this stage ensures that, while access to data sources is still available, everything is covered and there isn’t a need for a second trip to the analytic well, which always costs money and time that could be better spent elsewhere.

This is an example of an informal checklist of questions that designers need to answer in the analysis process. In actual practice, a list might be many times more detailed or somewhat simpler, depending on the project; a simple way to track whether such a checklist is complete is sketched after the list.

Evaluations and feedback should be solicited from everyone who is not an instructional designer but was involved in the course design itself.

Checklist for Evaluation of Analysis Process

   Is this an issue or a problem that can be completely fixed by training alone?

   Is this an issue or a problem that can be improved by a training program?

   Have you gathered all the data (enough data) concerning:

  Population

  Subject matter

  Organizational goals

  Learner goals and needs

  Logistics

  Resources

  Constraints

   Have you reviewed your analysis results with:

  Stakeholders

  Subject matter experts

  Target population sample

  Other designers

   Have you compared your findings against other internal or external benchmarks?

   Have you double-checked all of the above?
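
One way to keep a checklist like this honest is to track it in a simple structure that flags anything still unanswered before analysis is declared complete. The sketch below is only illustrative; the item names merely echo the checklist above and would be tailored to a real project.

```python
# Illustrative analysis-phase checklist: each item is marked True only
# when the data has been gathered and reviewed.
analysis_checklist = {
    "Population data gathered": True,
    "Subject matter data gathered": True,
    "Organizational goals documented": True,
    "Learner goals and needs documented": False,
    "Logistics, resources, and constraints reviewed": True,
    "Findings reviewed with stakeholders and SMEs": False,
    "Findings compared against benchmarks": True,
}

open_items = [item for item, done in analysis_checklist.items() if not done]

if open_items:
    print("Analysis is not complete. Open items:")
    for item in open_items:
        print(f"  - {item}")
else:
    print("All analysis items are covered.")
```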

Evaluation During Design

Design-phase evaluation is critical to the success of a project. Designers have little chance for success if they allow a flawed instructional design to move forward to development and implementation. Objectives, evaluation tasks, and all the critical elements of course design take shape in the design phase, and they need to pass some level of evaluation. Evaluations here address problems early and save time and money as a result.

The value of design phase evaluations is that they enable coordination of information among all those working on a project. For designers working on their own, it is important to have someone review their design phase work because it is easy for designers to lose focus when they become glued to the process. A quick evaluation of both product and process is an absolute necessity.

Information from even the best analysis can go astray in the hands of a technical writer or designer. A SME is the best resource to use to check that the content is correct and clear. A SME’s review can prevent embarrassing errors from occurring when the course is rolled out.

The value of design phase evaluations is that they enable coordination of information among all those working on a project.

Designers need to ensure that the following evaluations take place:

• Review of all the design plan elements by the SMEs and at least one other designer

• Review of all objectives and evaluation tasks by the SMEs and at least one other designer

• Review of evaluation strategy and materials

• Review of all draft participant materials

• Review of all draft facilitator materials

• Review of all draft media

• Review of everything by the decision makers

• Sign-off on everything

Evaluation of Design Elements

The designer must incorporate evaluation components that assess the validity of the objectives, the extent to which the objectives correlate with the desired behavior, and the fit between the objectives and the process employed for learning-level (Kirkpatrick Level 2) evaluation, that is, performance agreement.

Evaluation of Objectives

The first step in the process is to identify each component in the objective. Following the recommendations made in other chapters, designers should scrutinize the four elements of a learning objective—audience, behavior, condition, and degree—and rate each element from one to 10 on the basis of how well it is written.

Two examples follow. The first objective is not written well, and the evaluation of it shows where the problems exist. The second objective is much better and reflects good instructional design practice.

The first objective says:

At the end of this course, the learner will know about radar.

The components of the objective are as follows:

• Audience: the learner

• Behavior: will know about radar

• Condition: at the end of this course

• Degree: not available

Here is a suggested rating:

• 5 for the audience statement

• 5 for behavior

• 3 for condition

• 0 for degree

The designer would then calculate the quality of the objective. First, the ratings would be added for a total of 13, then divided by four to produce a score on the 10-point scale. For this objective, the score is 13 ÷ 4 = 3.25 out of a possible 10. It is not very good, but it gives an idea of what the course is about.

The next objective uses language more successfully:

Given four hours in the classroom and two hands-on exercises, the Weather Radar Repair participant should be able to describe without error the five basic operational modes for a model WSR 88-D radar unit.

• Audience: the Weather Radar Repair participant

• Behavior: should be able to describe the five basic operational modes for a model WSR 88-D radar unit

• Condition: given four hours in the classroom and two hands-on exercises

• Degree: without error

The audience rates a 10 because it is not possible to have more information unless you name the students individually. The behavior is a 10 because the objective states clearly what the participant is expected to do. The condition is a little weak because it could include materials, so give it an 8. The degree is clear enough to deserve a 10. The score on this objective is (10 + 10 + 8 + 10) ÷ 4 = 9.5, much better than the first objective’s score of 3.25.

This use of a 1–10 rating system may appear subjective, but a system can be developed that will apply to different designs and provide great value to the evaluation process.
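
The scoring just described is easy to make systematic. Below is a minimal sketch, in Python, that applies the same arithmetic to the two radar objectives; the element ratings are the ones assigned above, and the function name is simply illustrative.

```python
def objective_score(ratings):
    """Average the four element ratings (audience, behavior, condition,
    degree) to produce a score on the 10-point scale."""
    return sum(ratings.values()) / len(ratings)

weak_objective = {"audience": 5, "behavior": 5, "condition": 3, "degree": 0}
strong_objective = {"audience": 10, "behavior": 10, "condition": 8, "degree": 10}

print(objective_score(weak_objective))    # 3.25
print(objective_score(strong_objective))  # 9.5
```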

Degree of Difficulty of Objectives

The term degree of difficulty does not refer to how difficult objectives are to write, but to how difficult they are for the learner to meet. This evaluation is important regardless of the content complexity or age group a designer is working with. There are several reasons designers are concerned about difficulty. First, rating the level of difficulty in a series of objectives allows them to be sequenced from easy to hard in the design; the level may not be obvious unless the designer rates the objectives. Second, designers need to be aware of difficulty to assure themselves that they are challenging their learners at the level at which their analysis shows the learners can both absorb and synthesize. Third, if they are evaluating another project, they need to make sure that the level and sequencing of objectives are consistent with the project’s goals.

Designers use the verb in each behavior for rating the difficulty because it is the heart of the objective.

Designers use the verb in each behavior for rating the difficulty because it is the heart of the objective. The verb shows that the designer is asking a participant to do “something” and that something is associated with a particular level of difficulty. Designers should rate the difficulty the same as they do the objectives, using a scale from one to 10. Consider, for example, ratings for these action verbs:

• “List” is not very difficult, so it will be a 3.

• “Apply” is more difficult and deserves a 5.

• “Critique” is a 10 because it is more difficult than the first two.

These ratings are just an example because there are contexts in which a designer may classify “apply” or “critique” as less difficult than “list.” Subtle differences between items, for example, may make it hard to list them in certain orders; a lesson may be so clear that it is easy to apply it; or the merits and demerits of certain items may be so obvious that criticism comes easily.

If these three behavior verbs were in one module, a designer would want to order them from easy to hard. There are exceptions, such as when a designer wants to start with a more difficult concept or skill and then move to easier objectives. But use your design skills to write and sequence your objectives according to their level of difficulty.
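
A simple way to apply these ratings is to attach a difficulty value to each objective’s verb and sort the module from easy to hard. In the sketch below, the verb ratings are the illustrative ones used above, and the objectives themselves are invented examples.

```python
# Illustrative difficulty ratings for behavior verbs, on a 1-10 scale.
verb_difficulty = {"list": 3, "apply": 5, "critique": 10}

module_objectives = [
    ("Critique a completed lesson plan", "critique"),
    ("List the elements of a lesson plan", "list"),
    ("Apply the lesson plan template to a new course", "apply"),
]

# Sequence the objectives from easy to hard based on the verb rating.
for text, verb in sorted(module_objectives, key=lambda o: verb_difficulty[o[1]]):
    print(f"{verb_difficulty[verb]:>2}  {text}")
```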

Performance Agreement

Performance agreement is the relationship between behavior and condition elements in objectives and evaluation tasks. The link between the two is critical to ensuring that the performance stated in the objective is in agreement with the performance in the evaluation task.

Performance agreement is comparable to motion pictures in that one of the most important jobs on a movie set is known as continuity. The person who handles continuity makes sure the filming and the sequence of the final product match the script.

Similarly, the designer must make sure that objectives are written correctly and that the evaluation task supports the objective in behavior and degree. The designer facilitates this process by checking performance agreement.

Here is an example of an objective and its evaluation task:

• Objective: Given a car and a filling station, the Fueling the Car participant will fill the car without spilling any gas.

• Evaluation task: You have just stopped at a filling station. Fill the car completely without spilling any gas.

This example has performance agreement since the behaviors, conditions, and evaluation task match one another in objective domain and other issues (Table 9-1).

Table 9-1. Performance Agreement Example

 

                     Objective 1
Behavior             Match
Conditions           Match
Evaluation task      Match
Objective domain     Psychomotor

Following is an example without performance agreement:

• Objective: Given a defibrillator and a stethoscope, an intern working with a doctor at the hospital in the Internal Medicine Rounds program should be able to perform cardiopulmonary resuscitation (CPR) on a patient during a “code blue” emergency without error.

• Evaluation task: You are performing rounds with your assigned doctor when a “code blue” is called in the next room. The nurse calls out that it appears to be a heart attack. You and the lead physician hurry to the room and determine that it is, in fact, a patient with no pulse. The lead physician orders you to perform CPR while she finds the defibrillator. In 500 words, describe how you would perform CPR.

Here the behavior in the objective and the evaluation task do not agree, as shown in Table 9-2.

Table 9-2. Example of Missing Agreement

 

                     Objective 1
Behavior             No Match
Conditions           No Match
Evaluation task      No Match
Objective domain     Psychomotor/Cognitive

The conditions somewhat match. The behaviors do not match since performing and describing are two vastly different behaviors and two different objective domains. In this case the mismatch could prove life threatening.

To fix this missing agreement, the designer could either rewrite the objective so that the behavior says “should be able to describe” or change the evaluation task to say “perform CPR on the patient.”

It is a good idea to check performance agreement for all your objectives, even if the consequences are not life threatening.
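
Performance agreement also lends itself to a simple side-by-side check. The minimal sketch below compares the behavior, condition, and objective domain recorded for an objective with those of its evaluation task; the field names and the CPR example values are illustrative, not a prescribed format.

```python
def check_agreement(objective, task):
    """Compare each recorded element of an objective with the matching
    element of its evaluation task."""
    return {field: objective[field] == task[field] for field in objective}

objective = {
    "behavior": "perform CPR",
    "condition": "code blue emergency, defibrillator and stethoscope at hand",
    "domain": "psychomotor",
}
evaluation_task = {
    "behavior": "describe CPR in 500 words",
    "condition": "written response",
    "domain": "cognitive",
}

for field, match in check_agreement(objective, evaluation_task).items():
    print(f"{field:<10} {'Match' if match else 'No match'}")
```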

Evaluation in the Development Phase

Evaluation is also important in this phase, when many critical decisions are made that can greatly affect the success of your project. Designers must make sure their evaluation plan is ready for the pilot testing of the project. Issues that typically come up as they pilot test are segment timing, deficiencies in materials, lack of clarity in the course structure, and failure to design for the target population. A dozen other minor things may arise as well.

Segment timing is sometimes the hardest task for a designer. Differences in facilitators, equipment, and materials affect timing. A designer should allow for the possibility that any variable may affect timing. It is usually a good design strategy to add extra time. It is also valuable to time several run-throughs of a segment and average the time for the design.

It is common to find deficiencies in materials during pilot testing. These problems can range from typographic errors in the copy to offensive graphics or wording. Sometimes simple issues, such as having the materials in the right language, come into play. Just when a designer thinks everything is under control, someone will notice a problem in the materials, perhaps an error in the chief executive officer’s name. Designers should fix all errors.
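
To put the timing advice above in concrete terms, here is a minimal sketch that averages several run-throughs and pads the result; the run-through times and the 15 percent buffer are chosen purely for illustration.

```python
from statistics import mean

# Timed run-throughs of one segment, in minutes (illustrative values).
run_throughs = [42, 47, 45]

average = mean(run_throughs)
buffer = 0.15  # extra time to absorb facilitator, equipment, and material variation

planned_minutes = average * (1 + buffer)
print(f"Plan roughly {planned_minutes:.0f} minutes for this segment.")
```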

Clarity in the structure of a course is essential if the course is to be effective. Designers do not devote weeks and months to preparing a course just to watch facilitators struggle with the flow of the course or the participants roll their eyes skyward. Pilot tests often reveal holes in the population analysis, indicating that it undershot or overshot the average learner. It is the designer’s responsibility to adjust the population information and content to match the pilot test’s findings.

Segment timing is sometimes the hardest task for a designer.

Evaluation During Implementation

The process of evaluation in implementation was covered in great detail in the preceding chapter. Elements of this evaluation process include pre-course, delivery, and learner evaluations at Levels 1 and 2. Other important aspects of this evaluation are the use of the quality rating rubrics for objectives, design plans, and lesson plans.

Evaluation of Your Evaluations

Designers who have evaluated everything in the other four phases will probably find that the evaluation phase itself is the easiest to evaluate. Evaluation products that designers complete during the evaluation phase may include project-end reviews and program evaluations for grants. Each of these is important and requires designers to do some thoughtful retrospection on both the process and the product of the project.

Project-end reviews have two purposes. First, they look at how well the process worked for delivering the project. Designers should conduct these reviews whether they are working alone or have 30 staff members. To arrive at some objective data, it is important that each person involved reflect on what happened and share those observations with the other people involved.

If the training was contentious, it is best for the initial feedback to be gathered anonymously because participants may not want to give honest evaluations if they fear reprisals for their answers. Later, the designer can bring everyone together and work through the problems. If the problems are not fixed at the evaluation stage, they will be repeated.

Grants usually require program evaluations because the groups that give money want to know what they got for it. These evaluations give designers an opportunity to highlight the best parts of the project.

Evaluation data, when presented with graphs or other visual elements, make the case for success. Designers should review the objectives and course rationale and then ensure that the evaluation underscores the results that support those goals.

REFLECTION

Psychometrics in the field of ISD is beginning to be discussed among designers and others involved in course development. This uptick in discussion of statistical analysis within ISD mirrors what is happening in just about every other professional endeavor as practitioners try to capture accurate and unbiased data on every aspect of their work.

As a practitioner in the field of ISD, what kinds of psychometric data do you think are important to gather and review?

Do you ever feel that going to this level of evaluation is unnecessary for the work that you perform in the field?

Psychometrics

One of the interesting aspects of evaluation, or assessment, as some call it, is the field of psychometrics, which is generally related to psychological measurement of skills and knowledge. One branch of this field is related to the construction and validity of analytical and evaluative measures such as tests, quizzes, and other Level 2 measures. There is also another facet of the field that studies the theoretical aspects of testing.

There is a fair amount of controversy relating to some aspects of this work since some of the evaluations are related to IQ, going back to the Stanford-Binet IQ Test. Later instruments in the psychometric tradition include the Myers-Briggs Type Indicator and the Minnesota Multiphasic Personality Inventory. While you may or may not run into these or similar evaluative instruments, if you do, make sure you do the necessary research to determine their validity and usefulness for your specific situation. You may also find it interesting to review some of the literature and theoretical papers offered from the field.
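
As one concrete example of the kind of psychometric analysis a designer might encounter for a Level 2 instrument, the sketch below computes Cronbach’s alpha, a common internal-consistency (reliability) estimate, for a small set of scored quiz items. The quiz data is invented for illustration; a real analysis would use far more learners and items.

```python
from statistics import pvariance

# Item scores for five learners on a four-item quiz (1 = correct, 0 = incorrect).
# Rows are learners, columns are items; the data is invented for illustration.
scores = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
]

def cronbach_alpha(rows):
    """Estimate internal consistency: k/(k-1) * (1 - sum of item variances
    divided by the variance of the total scores)."""
    k = len(rows[0])
    item_variances = [pvariance([row[i] for row in rows]) for i in range(k)]
    total_variance = pvariance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```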

Summary

Evaluation in the field of instructional design is a multifaceted process that involves different approaches and instruments to measure every aspect of the design process and product. A designer must be certain and be able to show objectively that every facet of the design process meets or exceeds expected standards of performance. Additionally, learner evaluation must be the keystone of any course design to ensure and be able to measure content mastery.

DISCUSSION QUESTIONS

• As a designer, what do you think is the single most important element of evaluation?

• Is it ever reasonable to not measure mastery in learners?

• Are there ever times when a learner’s apprehension of the mastery of content will impact results?

• What are the most important aspects of evaluating the design process itself?

• Does the degree of difficulty evaluation provide data that assists in choosing content for a specific population?

CASE STUDY 1

A local community organization that you have as a client doesn’t see the benefit of having a rigorous evaluation process for mastery in learners in all of its courses. It is more concerned with learners enjoying taking the courses and recommending them to friends and associates. It also says that several learners have complained that the tests were too hard and didn’t prove anything. The organization has asked you to present your reasons for continuing to offer evaluations of mastery as part of its courses.

Facts:

• Organization offers courses in gardening

• Presently 25 courses

• Average of 20 learners in each course

• Courses are offered at community centers, public gardens, arboretums, and similar facilities

• Facilitators are mostly community volunteers with no training in facilitation of the courses

• Learners are awarded certificates for participation

What is your report to the organization?

CASE STUDY 2

An international maritime training center has asked you to review several of its courses that train merchant mariners to defend their ships against attacks by pirates. The courses are well received by the participants, but they do not have any formal review or approval by any external maritime organization. The outside groups are requesting detailed information concerning the courses, their design approach, and any data relating to the reliability of the courses in teaching proven methods for defending the ships.

What is your first step in addressing the requirements for documentation and proof of conformity to the expressed objectives of the courses?

There is a list of well-written course objectives. How will you show that the objectives are taught and mastery is achieved by the learners?

CASE STUDY 3

A motorcycle maintenance company is getting a lot of complaints that its courses are not meeting the needs of the dealerships it is serving. Technicians are saying that the information they received in the courses is dated by the time they have to use it. They are also saying that the materials—like checklists—that they are given in the courses are no longer current or accurate.

You have agreed to take the company as a client and you are meeting with the board of directors and training manager in several days to begin the process of reviewing the problems. You know as a designer that the longer course content is not used after implementation, the less likely it is to be useful or remembered accurately by learners.

How will you address the problem and where will you begin?
