19

The Practice of Human Factors

We do not have to experience confusion or suffer from undiscovered errors. Proper design can make a difference in our quality of life … Now you are on your own. If you are a designer, help fight the battle for usability. If you are a user, then join your voice with those who cry for usable products.

D. Norman
2013

INTRODUCTION

Our fundamental premise is that system performance, safety, and satisfaction can be improved by designing for human use. Objects as simple as a hammer or as complex as heavy construction equipment, or the complicated interactions arising between people and machines on a factory floor, or between people and their electronic devices, can benefit from a human factors analysis. Armchair evaluations, or “common sense” approaches, will not ensure ergonomically appropriate designs for most of the design issues discussed in this book. If common sense were all that was necessary to design safe and usable products, then everyone would be able to use their Blu-ray and DVD players, pilot error would not be the cause of so many aviation accidents, secretaries would not complain about their computer workstations, and there would be no human factors science and profession.

Physical and psychological aspects of human performance in laboratory and work environments have been studied for more than 150 years. Consequently, we know a lot about factors that influence human performance and methods for evaluating performance under many different conditions. In this book, we have examined perceptual, cognitive, and motoric aspects of performance as well as some environmental and social factors, retaining throughout the conception of the human as an information-processing system. The value of this viewpoint is that both the human and machine components within the larger system can be evaluated using similar criteria.

The system concept is a framework for studying the influence of design variables (Pew & Mavor, 2007). Within this framework we evaluate the performance of the components, as well as overall system performance, relative to the system goals. Without such a framework, human factors would consist of an uncountable number of unrelated facts, and the way we apply these facts to specific design problems would be unclear. We would know that users prefer entering data with one software interface rather than another, or that operators of a control panel respond faster and more accurately when a critical switch is located on the left rather than the right. However, we would be unable to use this information to generalize to novel tasks or environments. Each time a new design problem surfaced, we would have to begin from scratch.

The body of design-related knowledge provided by human factors research, called human–systems interface technology, can be divided into five categories (Hendrick, 2000):

Human–machine interface technology: design of interfaces for a range of human–machine systems to enhance usability and safety;

Human–environment technology: design of physical environments to enhance comfort and performance;

Human–software technology: design of computer software for compatibility with human cognitive capabilities;

Human–job interface technology: designing work and jobs to improve performance and productivity;

Human–organization interface technology: a sociotechnological systems approach in which the larger organizational system in which a person operates is taken into consideration.

We take the scientific knowledge behind each of these technologies and apply it to specific design problems. You now have read a lot about these kinds of problems and the techniques used to solve them. You also probably realize now that designing for human use requires contributions from many different people with different points of view. In fact, human factors/ergonomics is “a multidisciplinary endeavor that involves the design and engineering of systems for human use” (Dempsey, Wogalter, & Hancock, 2000, p. 6). In this final chapter, we will examine the issues that arise in the practice of human factors, and the interactions that human factors specialists have with other members of a design team.

The human factors specialist plays, or should play, an active role in many stages of the development process for systems and products (McBride & Newbold, 2016; Meister & Enderwick, 2002). Often the first step in this process is convincing management that the benefits of human factors analyses outweigh their costs. When everyone agrees that such analyses are necessary, the human factors expert needs to be careful not to make vague prescriptions, such as “Don’t put the control too far away.” When possible, she must provide the other members of the design team with quantitative predictions of performance for different design alternatives, and this is not a trivial task, as we have seen. The most detailed model of human performance may not be formulated for the application that is the target of the design team. Consequently, the human factors expert can use an engineering model, developed to make “ballpark” predictions for specific applications, or develop a more refined prediction from an integrative cognitive framework.

After the design phase is over and products are ready to go to market, the organization that made them will be concerned with safety and liability. If the product causes an accident or injury, or if a consumer is using the product when an accident or injury occurs, the organization may be held liable. Litigation may arise over issues of usability engineering, such as whether the product presented an unreasonable hazard to the user while performing the task for which it was intended.

We will examine each of these issues in turn in this chapter.

SYSTEM DEVELOPMENT

Although interest in understanding the role of humans in systems and accommodating that role in design has a history of more than 60 years, there has been a continuing concern that, in each phase of development, the human element is not sufficiently considered along with hardware and software elements. When information about the performance characteristics and preferences of operators and users is not introduced early enough in the process, there are higher risks for both simple and catastrophic failures in the systems that are produced.

R. W. Pew & A. S. Mavor
2007

MAKING THE CASE FOR HUMAN FACTORS

Early consideration of the human element in each phase of the design process, as advocated by Pew and Mavor (2007), is necessary to ensure that a system will operate as intended, but its importance is often not obvious to design team members who are not human factors specialists. The human factors specialists will have to convince managers, engineers, and other organizational authorities that the money invested in a human factors program is well spent. This will not always be easy: the costs of human factors analyses in both time and resources are readily apparent to management, but the benefits are often not as immediate and, in some cases, are difficult to express in tangible monetary values (Rensink & van Uden, 2006; Rouse & Boff, 2012). However, it should be obvious by now that human factors analyses improve safety and performance, which in turn translates into financial gains (Karat, 2005). Benefits arise both from improvements in equipment, facilities, and procedures within the organization and from improved usability of products produced by the organization.

An ergonomics program within an organization can increase productivity and decrease overhead, increase reliability, reduce maintenance costs, and increase safety, as well as improve employees’ work satisfaction and health (Dillon, 2006; Rensink & van Uden, 2006). An ergonomic approach to product design reduces cost by identifying design problems in the early development stages, before a product is developed and tested. The final product will have reduced training expenses, greater user productivity and acceptance, reduced user errors, and decreased maintenance and support costs (e.g., Marcus, 2005). The benefits of ensuring usability can be particularly substantial for the design of websites, because poorly designed sites will drive users to a competitor’s site instead (Mayhew & Bias, 2003; Richeson, Bertus, Bias, & Tate, 2011).

Making the case for early consideration of human factors is easier when human factors specialists understand the perspective of managers in an organization and how human factors relates to the organization’s strategic goals (Village, Salustri, & Neumann, 2013). A cost-benefit analysis is an effective way to communicate with management and convince them of the need to support ergonomics programs and usability engineering. There are several ways to conduct such an analysis (Rouse & Boff, 2012). The results of this analysis can then be presented in terms of the amount of money that the company will save through supporting such programs, an approach to argument based on return on investment (ROI; Richeson et al., 2011).

This approach was used by human factors engineers at the Shell Netherlands Refinery and Chemical complex. They developed a systematic cost-benefit method for helping to determine the costs and benefits for an ergonomically sound plant design (Rensink & van Uden, 2006). The method allowed designers “to visualize the potential benefits of ergonomic design and to serve as an aid to process technicians, human factors engineers and project managers who have to make decisions about the design in new construction or improvement projects” (Rensink & van Uden, 2006, p. 2582). The procedure resulted in a table with labeled columns for each of eight areas that may benefit from improved design (e.g., worker health). The table rows are specific benefits that may yield gains for more than one of the areas (e.g., reducing on-the-job accident rates; see Figure 19.1). With such a table it was easy to see the impact that a specific benefit might have across a range of areas, which in turn made it easier to assign values to the benefits. This meant that designers and managers could more easily identify cost savings and intangible benefits for safety and health.

FIGURE 19.1 Cross-reference quick-scan benefit table.

By performing cost-benefit analyses, human factors specialists not only justify their own funding, but also:

1. Educate themselves and others about the financial contribution of human factors,

2. Support product management decisions about the allocation of resources to groups competing for resources on a product,

3. Communicate with product managers and senior management about shared fiscal and organizational goals,

4. Support other groups within the organization (e.g., marketing, education, maintenance), and

5. Get feedback about how to improve the use of human factors resources from individual to organization-wide levels. (Karat, 1992, 2005)

There are three steps to a basic cost-benefit analysis (Karat, 2005). First, identify those variables associated with costs and benefits for a project, along with their associated values (Richeson et al., 2011). Costs include salaries for human factors personnel, building and equipping facilities for conducting usability tests, payments for participants in the tests, and so on. Benefits, as noted, include things such as reduced training time and increased sales. Second, analyze the relationship between the costs and benefits, estimating the ROI for the cost. At this stage, you may develop a number of alternative usability proposals and compare them with each other. Finally, someone must decide how much money and resources will be invested in the human factors analysis, and the ROI provides a metric that anyone can understand.
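The arithmetic behind these steps is straightforward. The sketch below walks through a basic cost-benefit computation; all dollar figures and line items are hypothetical placeholders for illustration, not data from any study cited in this chapter.

```python
# Illustrative cost-benefit sketch for a usability program.
# Every figure below is a hypothetical placeholder.

def roi(total_benefits, total_costs):
    """Return on investment: net benefit as a fraction of cost."""
    return (total_benefits - total_costs) / total_costs

# Step 1: identify cost and benefit variables with estimated values.
costs = {
    "human factors personnel salaries": 120_000,
    "usability lab construction and equipment": 40_000,
    "payments to test participants": 5_000,
}
benefits = {
    "reduced training time": 90_000,
    "fewer support calls and user errors": 150_000,
    "increased sales from improved usability": 200_000,
}

# Step 2: analyze the relationship between costs and benefits via ROI.
total_costs = sum(costs.values())        # 165,000
total_benefits = sum(benefits.values())  # 440,000
print(f"ROI: {roi(total_benefits, total_costs):.2f}")
print(f"Savings-to-cost ratio: {total_benefits / total_costs:.2f}:1")
```

Step 3, the investment decision, is then made by management with the ROI figure as a common metric; alternative usability proposals can be compared by rerunning the same computation with their respective estimates.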

Estimating the costs and benefits associated with implementing an occupational ergonomics program or a human factors program in system and product development is difficult (Beevis, 2003). However, it can be done effectively and, as illustrated below, will usually indicate the value of the program.

Occupational Ergonomics Programs

We will assess many different cost outcomes when we implement an ergonomics program for redesigning work (Macy & Mirvis, 1976). These include absenteeism, labor turnover, tardiness, human error, accidents, grievances and disputes, learning rate, productivity rate, theft and sabotage, inefficiency or low yields, cooperative activities, and maintenance. Good work conditions also provide commercial and personal benefits by promoting increased comfort, satisfaction, and positive attitudes toward work (Corlett, 1988).

A case study of the implementation of an ergonomics program at a wood processing plant illustrates the benefits derived from such a program (Lahiri, Gold, & Levenstein, 2005). The company conducted ergonomic evaluations for the jobs performed by forklift operators, machine operators, crane operators, technicians, and general production workers. Based on these evaluations, they introduced workstation modifications that included adjustable chairs, lift tables, conveyors, grabbers, floor matting, and catwalks to be used instead of ladders. In addition, the company hired a physical therapist to teach the workers exercises that would help prevent musculoskeletal disorders. The company reported such benefits as a reduction in the number of cases of lower back pain (and the resulting loss of productive work time) and a 10% increase in productivity for all workers. They estimated the total financial benefit to be approximately 15 times greater than the cost of the program.

Another company reported similar benefits. A leading brewery instituted a manual handling ergonomics program for beer delivery personnel (Butler, 2003). Most of their beer deliverers start working for the company while in their 20s and work continuously until retirement. The daily load of beer deliveries that each person handles is large, and it has increased over the years as the company has added products to its offerings. The heavy physical demands of the delivery job force the delivery personnel to retire at the very early age of 45.

The development and implementation of a manual handling ergonomics program began in 1991. The company conducted task analyses for all manual materials activities performed by the delivery personnel, and they examined delivery sites for possible physical changes that would reduce handling difficulty. Some of the changes they implemented included lowering the loading height of the beer delivery vehicles and supplying cellar lifts for sites where the beer had to be lowered into a cellar. They also developed a training program. Everyone involved in beer delivery received 3 days of training on proper lifting and handling methods. The company reported a substantial decrease in work-related insurance claims and manual handling accident rates. They estimated the costs of the ergonomics program at $37,500, whereas the benefits were approximately $2.4 million.

Many companies may not have the resources for an occupational ergonomics program. However, some providers of office equipment supply their customers with ergonomic services at the time that the equipment is purchased (IBM, 2006; Sluchak, 1990). These include consultation about workstation design, assistance in conducting in-store evaluations of equipment, assistance in implementing employee training programs, recommendations for the design of interfaces and websites, and briefings on topics such as cumulative trauma disorders.

System and Product Development

The costs associated with incorporation of human factors into the system and product development process include wages for the human factors personnel. In addition, there are several distinct costs associated specifically with the human factors process (Mantei & Teorey, 1988, p. 430). These include expenses involved with evaluating the concerns of the intended user population in preliminary studies, constructing product or system mock-ups, designing and modifying prototypes, creating a laboratory and conducting advanced user studies, and conducting user surveys.

With regard to products such as computer software, the cost-benefit ratio will depend on the number of users affected by the ergonomic changes. Karat (1990) performed cost-benefit analyses for two software development projects that had incorporated human factors concerns. One of these projects was of small scale and the other was of large scale. She estimated the savings-to-cost ratio to be 2:1 for the small project and 100:1 for the large project. The savings that arise from human factors testing increase dramatically as the size of the user population increases (Mantei & Teorey, 1988). For smaller projects, a complete human factors testing program will not be cost-effective. Here again we see the importance of a cost-benefit analysis: not only can it justify human factors research, but it will also make the investment in such research commensurate with the expected savings-to-cost ratio.
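The reason savings scale with the user population is that much of the testing cost is fixed, while per-user benefits multiply across every user. A minimal sketch of this scaling logic, with hypothetical figures (not Karat's actual data):

```python
def savings_to_cost_ratio(n_users, benefit_per_user, fixed_cost):
    """Savings-to-cost ratio when usability testing cost is roughly fixed
    but benefits (e.g., time saved per user) scale with the population.
    All figures supplied by callers here are hypothetical."""
    return (n_users * benefit_per_user) / fixed_cost

# Same testing program, same per-user benefit, very different payoffs:
small = savings_to_cost_ratio(n_users=100, benefit_per_user=50, fixed_cost=20_000)
large = savings_to_cost_ratio(n_users=50_000, benefit_per_user=50, fixed_cost=20_000)
print(f"{small:.2f}:1 for the small project, {large:.0f}:1 for the large one")
```

For the small project the ratio falls below 1:1, illustrating why a complete testing program may not be cost-effective there, while the identical program pays off handsomely at scale.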

Even in the military, we have to consider the benefits of investing in new systems to enhance human and system performance relative to the cost of such investments (Rouse & Boff, 2012). Benefits may include more precise and efficient weapons systems, increased operability of the system, improved design using new techniques, and new opportunities for military strategists. Costs are those associated with the initial research and development, recurring operating expenses, and development time. These costs and benefits accrue to other organizations as well, including the developers (the contracting companies who stand to benefit from receiving research and development funds) and the public (who stand to benefit from increased performance of the military’s missions).

THE SYSTEM DEVELOPMENT PROCESS

Phases

The development of a system is driven by the system’s requirements (Meister, 1989; Meister & Enderwick, 2002), and the primary goal of the development team is to produce a system that meets or exceeds those requirements. System development begins with a broadly defined goal and proceeds to more and smaller tasks and subtasks. Most system requirements do not include human performance objectives; initially, the requirements specify only how the physical system is to perform. Consequently, the human factors specialist must determine what the user will need from those physical requirements.

We have mentioned several times the importance of including human factors specialists from the outset of a project. The design decisions made at the early stages of system development have consequences for the later stages. From the initial decisions onward, there are human factors criteria, as well as physical performance criteria, that must be met if the system is to perform optimally. The U.S. military is well aware of this. In 1986, the U.S. Army initiated the MANPRINT program (Booher, 1990), which forces designers to deal with human factors concerns from the outset of the system development process.

The MANPRINT program is now called Human Systems Integration. It and other programs like it were established because failures to consider human factors concerns before initial design decisions were made had resulted in the production of equipment that could not be used effectively or meet its performance goals. For example, the U.S. Army’s Stinger anti-aircraft missile system, designed without human factors considerations, was supposed to be capable of successfully striking an incoming enemy aircraft 60% of the time. However, because the designers did not consider the skill and training required of the soldier operating it, its actual performance was closer to 30% (Booher, 2003b). Julia Ruck, of the U.S. Army, said in 2014, “As a former soldier who spent years at the tactical edge, I can honestly say that the MANPRINT program, with its focus on integrating that human element, makes the difference between a material solution being used or sitting on a shelf” (Conant, 2014).

System development proceeds in a series of phases (Czaja & Nair, 2012; McBride & Newbold, 2016; Meister, 2006b). The first phase is system planning. During this phase, we will identify the need for the system, that is, what the system is to accomplish, and make assessments of its feasibility. The second phase is preliminary design, or initial design, during which we will consider alternative designs for the system. In this phase we will begin testing, construct prototypes, and create a plan for future testing and evaluation of the system. The third phase is detail design, during which we will complete the development and testing of the system and make plans for production. The final phase is design verification, when we produce the system and then evaluate it in operation. Data about how effective the system is, its strengths and weaknesses, are used to improve the design in subsequent generations.

Several questions about human performance will arise at each phase of system development. These questions are shown in Table 19.1. At the system planning phase, the human factors specialist evaluates the changes in the task requirements, the personnel that will be required, and the nature and amount of training needed for the new system relative to its predecessor. She ensures that human factors issues are addressed in the system design goals and requirements.

TABLE 19.1

Behavioral Questions Arising in System Development

System Planning

1. Assuming a predecessor system, what changes in the new system from the configuration of the predecessor system mandate changes in numbers and types of personnel, their selection, their training, and methods of system operation?

Preliminary Design

2. Which of the various design options available at this time is more effective from a behavioral standpoint?

3. Will system personnel in these options be able to perform all required functions effectively without excessive workload?

4. What factors are potential sources of difficulty or error, and can these be eliminated or modified in the design?

Detail Design

5. Which is the better or best of the design options proposed?

6. What level of personnel performance can one achieve with each design configuration, and does this level satisfy system requirements?

7. Will personnel encounter excessive workload, and what can be done about it?

8. What training should be provided to personnel to achieve a specified level of performance?

9. Are (a) hardware/software, (b) procedures, (c) technical data, and (d) total job design adequately engineered from the human point of view?

Design Verification

10. Are system personnel able to do their jobs effectively?

11. Does the system satisfy its personnel requirements?

12. Have all system dimensions affected by behavioral variables been properly engineered from the human point of view?

13. What design inadequacies must be corrected?

System design, in both the preliminary and the detail phases, is concerned with generating and evaluating alternative design concepts. The human factors specialist focuses on issues such as allocation of function to machines or humans, task analysis, job design, interface design, and so on (Czaja & Nair, 2012). During preliminary design, the specialist will judge the alternative design concepts in terms of their usability. He will recommend designs that minimize the probability of human error. When development moves from the preliminary to the detail phase, many of the questions addressed in the preliminary phase will be revisited. The final system design will be engineered to accommodate human performance limitations.

Human factors activities in the design verification phase will help determine whether there are any deficiencies left in the final design. We will conduct tests on the final system in an environment that closely approximates the operational conditions to which the system will be subjected. We may conclude from these tests that there are design features that need to be changed before the product or system is distributed.

In each phase the human factors professional will provide four areas of support to the design team. He will provide input to the design of the system hardware, software, and operating procedures with the goal of optimizing human performance. He will also make recommendations regarding how system personnel should be selected and recruited. The third area for which the human factors specialist will provide support concerns issues of training: What type should be given, and how much is needed? Finally, he will conduct studies to evaluate the effectiveness of the entire system, and more specifically, of the human subsystem.

The systematic application of human performance data, principles, and methods at all phases of system development ensures that the design of the system will be optimized for human use. This optimization results in increased safety, utility, and productivity, and ultimately benefits everyone: system managers, operators, and consumers.

Facilitating Human Factors Inputs

A central concern in human factors and ergonomics is how to get human factors experts involved in the design process, particularly in the early phases of development during which the initial design decisions are made. The design team often works under the pressure of a deadline, and they will focus primarily on developing a system or product that will achieve its primary development and operational goals. Consequently, the team may view incorporation of human factors methods and user/usability testing as a costly option that is not as important as other factors (Shepherd & Marshall, 2005; Steiner & Vaught, 2006).

Where should a human factors program be placed in an organization’s structure, when the organization implements one? Most people agree that the human factors specialists should be in a single, centralized group or department, under a manager who is sensitive to human factors organizational issues (Hawkins, 1990; Hendrick, 1990). A centralized group has several advantages that allow the human factors specialists to maximize their contributions to projects. The manager can champion human factors concerns at higher organizational levels, which is essential for creating an environment in which human factors will flourish. By establishing a rapport with persons in authority and increasing their awareness about the role of human factors in system design, the manager and group can ensure that their efforts will be supported within the organization. Further, financial support for laboratories and research facilities will be more reliable if there is a human factors group or department, rather than single individuals scattered throughout different departments. A stable human factors group also helps to establish credibility with system designers and engineers. Project managers are more likely to seek advice from the human factors specialists, because of their credibility and visibility. Finally, the group will foster a sense of professional identity that will boost morale and help in the recruitment of other human factors specialists.

It is an unfortunate fact that many engineering designers do not fully appreciate human factors, or believe that human factors can be addressed by anyone with knowledge of their projects. Therefore, it is important for human factors experts to raise awareness of the fact that more than just common sense is required to properly address human factors issues in the design process (Helander, 1999).

Apart from being welcomed on the design team, another problem human factors experts face is that everyone involved in the design of a system will view the problem from the standpoint of their discipline. Each will discuss problems using the vocabulary with which they are familiar and attempt solutions to problems using discipline-specific methods (Rouse, Cody, & Boff, 1991). A designer may not know what questions to ask the human factors specialist, or how to interpret the answer that she provides. Communication difficulties may result in loss of human factors information, and so the recommendations provided by the human factors specialist may have little impact on the system development process.

To prevent information loss, the human factors specialist has the responsibility of knowing at least something of the core design area (e.g., automobile instrument displays) to which she is contributing. Similarly, designers, engineers, and managers need to learn about human factors and ergonomics. Blackwell and Green (2003) suggest that human factors experts, designers, and users all learn a common set of cognitive dimensions along which design alternatives may differ. The structure provided by these dimensions will provide a common ground for communication about usability issues.

We have noted several times that it is difficult to get appropriate human factors input in the early planning and design phases of system development. In fact, designers frequently wait to worry about human factors issues until late in the detail design phase, well after many crucial decisions have already been made (Rouse & Cody, 1988). Consequently, the contribution of the human factors specialist is diminished by the necessity of working around the established features of the designed system. As Shepherd and Marshall (2005) emphasize, “Human factors professionals must continue to address the question of how best to support organizations so that significant human factors issues can be taken into account during system development in time for problems to be identified and solved with minimum expense and inconvenience” (p. 720).

One way we can solve this problem is through an approach called scenario-based design (Haimes, Jung, & Medley, 2013), in which the human factors professional develops scenarios depicting possible uses of a product, tool, or system (see Box 19.1). Another way is the participatory design approach, using methods such as focus groups to obtain input from intended users about their wants and needs for the product or system under development (Clemensen, Larsen, Kyng, & Kirkevold, 2007). Finally, we may implement system models that allow the human factors specialist and the designer to collaborate in evaluating alternative designs before prototypes have been developed. Integrative and engineering models of human performance and human motion, described in the next section, bring existing knowledge to bear on initial design decisions.

COGNITIVE AND PHYSICAL MODELS OF HUMAN PERFORMANCE

Our discussions of human information processing and basic anthropometric characteristics in the earlier chapters emphasized how human performance is limited by characteristics of tasks and work environments. We have explored basic principles, such as the fact that a person’s performance deteriorates when his working memory is overloaded and that his movement time is a linear function of movement difficulty (Fitts’s law). We have also discussed many theories that can explain these phenomena. This foundation, formed from data and theory, must be understood by anyone who wishes to incorporate human factors and ergonomics into design decisions.
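Fitts's law is a good example of a principle stated precisely enough to generate design predictions. A minimal sketch, using the classic formulation MT = a + b log2(2D/W); the intercept a and slope b below are hypothetical placeholders, since in practice they must be estimated from data for a particular device and user population:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) under Fitts's law, MT = a + b * ID,
    with index of difficulty ID = log2(2D / W) in bits.
    The constants a and b are device-specific; the defaults here
    are placeholders, not empirically fitted values."""
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# Doubling the distance to a target (or halving its width) adds one bit
# of difficulty, and hence a constant increment b to predicted time.
mt_near = fitts_movement_time(distance=8.0, width=2.0)   # ID = 3 bits
mt_far = fitts_movement_time(distance=16.0, width=2.0)   # ID = 4 bits
print(mt_far - mt_near)  # approximately b = 0.15
```

A designer can use such a function to compare, say, two candidate button sizes or placements before any prototype exists, which is exactly the kind of early, quantitative input discussed above.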

When faced with a specific design problem, you might begin by searching for information about how similar problems have been solved before. You can consult a variety of sources (Rouse & Cody, 1989): human factors textbooks like the present one; textbooks that cover specific content areas in more detail (e.g., attention; Wickens & McCarley, 2008); handbooks that provide detailed treatments of topics and prescribe specifications (e.g., Salvendy, 2012); journal articles (e.g., from Human Factors); and standards and guidelines (e.g., Karwowski, 2006b). Unfortunately, there is no easy way to determine exactly what factors will be critical for your specific problem and how they may interact with each other.

This is where models of human performance come in. Quantitative and computational models have played a significant role in human factors and ergonomics throughout the existence of the field (Byrne & Gray, 2003). We have encountered many such models throughout this book. However, many of these models were formulated to explore very narrow problems (e.g., the “tapping” or aimed movements that are the focus of Fitts’s law), and so they may not be very useful for human factors engineering problems. Some researchers have made a greater effort to develop “general-purpose” models, both of information processing in human performance (e.g., Byrne, in press) and of physical human motion (e.g., Haupt & Parkinson, 2015).

As an example, consider the problem of attention. In this book, we have talked about how attention has been studied in the laboratory, and there are many models of how attention works. Logan (2004) reviewed formal theories of attention and concluded that two classes, one based on signal-detection theory (see Chapter 4) and another on what is called similarity-choice theory, provide the best overall accounts of a range of attention phenomena. Logan states, “Their mathematical structure allows strong inferences and precise predictions, [and] they make sense of diverse phenomena” (p. 230). As important as these models and theories are for people who study human attention, they may not tell you, the human factors expert, what you need to know for your design problem.

BOX 19.1 SCENARIO-BASED DESIGN

Scenario-based design is an alternative approach for incorporating human factors into the design process (Carroll, 2006). Scenario-based design has been used extensively in the area of human–computer interaction, so much so that Carroll (2002) says that it “is now paradigmatic” (p. 621); that is, accepted practice. However, scenario-based design has not been adopted as widely within system design more generally (van den Anker & Schulze, 2006).

The human factors expert using a scenario-based approach generates narratives depicting various ways that a person might use a software tool or product. These narratives are then used to guide the design process, from addressing human factors requirements through the testing and evaluation of the tool. By exploring possible scenarios, the designer will discover potential difficulties that users may encounter and identify functions that would be beneficial for specific purposes.

Scenario-based design is important because it addresses several challenges in the design of technology (Carroll, 2006). First, scenarios require the designer to reflect on the purposes for which a person would be using the product or system, and the reasons why he or she might be using it. Scenarios focus the designer’s attention on the context in which the product will be used. Second, scenarios make the task environment concrete: They describe specific situations that can be easily visualized. This in turn means that the designer will be able to view the problem from a number of different perspectives, and visualize and consider alternative solutions.

Third, because scenarios are oriented toward the work that people will perform, they tend to promote work-oriented communication among the designers and the people who will use the product. Fourth, specialized scenarios can be abstracted from more general scenario categories. The way that the designer implements these more specific scenarios can rely on any prior knowledge that was used to implement the more general scenario. From this perspective, particular design problems can be solved by first classifying the problem according to what kind of scenario it is.

Scenarios can differ in their form and content (van den Anker & Schulze, 2006). They are most often narrative descriptions or stories. As these narratives become more refined, they can be depicted visually in storyboard drawings and graphics, and even in the form of simulated or virtual environments. They can focus on the activities of an individual user or on collaborative activities between multiple users. The computer-supported cooperative work described in Chapter 18 is an example of collaborative activities that might lend itself well to a scenario-based design. Furthermore, their level of abstraction can vary greatly; general scenarios with little detail can guide early design decisions, and as the design begins to take shape, the scenarios may include a great level of detail.

Designers can develop and apply scenarios in a variety of ways. At later design stages, as in a participatory design approach, people who represent the end users may contribute to the process. Often designers use “tabs” (like post-it notes) on a storyboard that represent the interface control and display functions that different users will perform (Bonner, 2006). Users then pull off the appropriate tab from the board and place it on a mock product to indicate the action that they would perform at a particular point in a task.

A case study employing scenario-based design focused on how agencies within the U.S. Department of Homeland Security (DHS) monitor and respond to emergencies (Lacava & Mentis, 2005). Engineers at Lockheed Martin wanted to design a command and control system for DHS. This particular problem was well suited to scenario-based design, because the software engineers didn’t even know who would be using their system. Other problems they faced included not really knowing what kinds of problems a DHS agency might be faced with, and the agencies themselves couldn’t say exactly how they would carry out assignments.

The designers, focusing on the Coast Guard, began by devising scenarios. Each scenario had a setting, actors working to achieve specific goals within that setting, and a plot detailing the sequence of actions taken by the actors in response to events within the setting. Issues that they encountered in the design process included how information was shared between different DHS agencies, how the information flowed from top-level intelligence to the lower-level individuals within the Coast Guard (and back up again), who within the Coast Guard would be responding to the information, and how the information needed to be displayed to the people at all levels of information flow. Constant refining of these scenarios resulted in a prototype system that they were able to present to the Coast Guard. Once the Coast Guard realized that the design team had accurately identified the problems involved with managing first response teams and had some concrete solutions, the designers convinced the agency to work with them more closely. With a group of actual users, the design team then proceeded with a more traditional user-centered design.

Although many information-processing models are not directly applicable to design issues, modeling is valuable to design engineers for several reasons (Gray & Altmann, 2006; Rouse & Cody, 1989). A model forces rigor and consistency in the analyses. It also serves as a framework for organizing information and indicating what additional information is needed. A model is also capable of providing an explanation for why a particular result occurs. Perhaps most importantly, designers can incorporate the quantitative predictions provided by a model into design decisions, but this is more difficult to do with only vague recommendations derived from guidelines and other sources.

The benefits provided by formal models are so considerable that many people have worked to develop general frameworks and models that allow a designer or modeler to predict human performance in specific task contexts (Elkind, Card, Hochberg, & Huey, 1990; Gluck & Pew, 2005; McMillan et al., 1989; Pew & Mavor, 1998). We summarize several approaches in the following sections.

ENGINEERING MODELS OF HUMAN PERFORMANCE

The primary purpose of engineering models of human performance is to provide “ballpark” values of some aspect of performance, for example time to perform a task, in a simple and direct manner. Engineering models of human performance should satisfy three criteria (Card, Moran, & Newell, 1983). First, the models should be based on the view of the person as an information processor. Second, the models should emphasize approximate calculations based on a task analysis. The task analysis determines those information-processing operations that might be used for achieving the task goals. Third, the models should allow performance predictions for systems while they are still in the design phase of development, before they have been built.

In sum, an engineering model of human performance should make it easy for a designer to provide approximate quantitative predictions of performance for design alternatives. We will describe two types of engineering models of human performance that satisfy these criteria: cognitive models developed primarily from research in cognitive psychology, and digital human models developed primarily from research in anthropometrics and biomechanics.

Cognitive Models

The most widely used cognitive engineering models are based on a framework developed initially by Card et al. (1983) for application to the domain of human–computer interaction. This framework, described briefly in Box 3.1, has two components. The first is a general architecture of the human information processing system called the Model Human Processor. It consists of a perceptual processor, a cognitive processor, and a motor processor, as well as a working memory (with separate visual and auditory image stores) and a long-term memory (see Figure 19.2). Each processor has one quantitative parameter, the cycle time (time to process the smallest unit of information), and each memory has three parameters: the storage capacity (in chunks), the decay time (in seconds), and the code type (acoustic or visual). These parameters are presumed to be context-free; that is, their values will be the same regardless of the task being performed. Estimates of their values are determined from basic human performance research and “plugged in.”

FIGURE 19.2 The Model Human Processor.
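The timing logic of the Model Human Processor can be sketched in a few lines of code. This is only an illustration of how the context-free parameters combine; the cycle times below are the nominal values given by Card et al. (1983), each of which actually has a documented fast-to-slow range rather than a single fixed value.

```python
# Nominal Model Human Processor cycle times from Card et al. (1983);
# each parameter also has a fast-to-slow range around these values.
TAU_P = 100  # perceptual processor cycle time, ms
TAU_C = 70   # cognitive processor cycle time, ms
TAU_M = 70   # motor processor cycle time, ms

def predicted_time_ms(n_perceptual, n_cognitive, n_motor):
    """Approximate task time as a serial sum of processor cycles."""
    return n_perceptual * TAU_P + n_cognitive * TAU_C + n_motor * TAU_M

# Simple reaction time: one perceive, one decide, one respond cycle.
print(predicted_time_ms(1, 1, 1))  # 240 ms
```

Because the parameters are assumed to be context-free, the same three constants can be reused across task analyses; only the operator counts change from task to task.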

Table 19.2 summarizes the principles of operation of the Model Human Processor. Many of these principles are based on fundamental laws of human performance that we described in earlier chapters. The most fundamental of these is the rationality principle.

TABLE 19.2

The Model Human Processor—Principles of Operation

P0. Recognize-Act Cycle of the Cognitive Processor. On each cycle of the Cognitive Processor, the contents of Working Memory initiate actions associatively linked to them in Long-Term Memory; these actions in turn modify the contents of Working Memory.

P1. Variable Perceptual Processor Rate Principle. The Perceptual Processor cycle time τp varies inversely with stimulus intensity.

P2. Encoding Specificity Principle. Specific encoding operations performed on what is perceived determine what is stored, and what is stored determines what retrieval cues are effective in providing access to what is stored.

P3. Discrimination Principle. The difficulty of memory retrieval is determined by the candidates that exist in the memory, relative to the retrieval cues.

P4. Variable Cognitive Processor Rate Principle. The Cognitive Processor cycle time τC is shorter when greater effort is induced by increased task demands or information loads; it also diminishes with practice.

P5. Fitts’s Law. The time Tpos to move the hand to a target of size S which lies a distance D away is given by:

Tpos = IM log2(2D/S + 0.5),

where 70 < IM < 120 ms/bit (approximately), and we may fix IM = 100 ms/bit in most circumstances.

P6. Power Law of Practice. The time Tn to perform a task on the nth trial follows a power law:

Tn = T1 n^(−α),

where 0.2 < α < 0.6 (approximately), and we may fix α = 0.4 in most circumstances.

P7. Uncertainty Principle. Decision time T increases with uncertainty about the judgment or decision to be made:

T = IC H,

where H is decision uncertainty (in bits), 0 < IC < 157 ms/bit (approximately), and we may fix IC = 150 ms/bit in most circumstances.

For n alternatives with different probabilities, pi, of occurrence,

H = Σ pi log2(1/pi + 1).

P8. Rationality Principle. A person acts so as to attain his goals through rational action, given the structure of the task and his inputs of information and bounded by limitations on his knowledge and processing ability:

Goals + Task + Operators + Inputs + Knowledge + Process limits → Behavior.

P9. Problem Space Principle. The rational activity in which people engage to solve a problem can be described in terms of (1) a set of states of knowledge, (2) operators for changing one state into another, (3) constraints on applying operators, and (4) control knowledge for deciding which operator to apply next.
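The quantitative principles in Table 19.2 translate directly into code, which is part of what makes them useful as engineering tools. The sketch below expresses P5, P6, and P7 as small functions, using the “fix in most circumstances” parameter values from the table (slopes in ms/bit); the example arguments in the final lines are arbitrary illustrations.

```python
import math

IM = 100.0  # Fitts's law slope, ms/bit (P5)
IC = 150.0  # uncertainty principle slope, ms/bit (P7)

def fitts_time_ms(d, s, im=IM):
    """P5: time to move the hand a distance d to a target of size s."""
    return im * math.log2(2 * d / s + 0.5)

def practice_time(t1, n, alpha=0.4):
    """P6: time on trial n, given time t1 on trial 1."""
    return t1 * n ** (-alpha)

def decision_time_ms(probs, ic=IC):
    """P7: decision time for alternatives with probabilities probs."""
    h = sum(p * math.log2(1 / p + 1) for p in probs)  # uncertainty, bits
    return ic * h

print(round(fitts_time_ms(d=80, s=10)))      # a 10-mm target, 80 mm away
print(round(practice_time(t1=1000, n=100)))  # time on the 100th trial
print(round(decision_time_ms([0.5, 0.5])))   # two equally likely alternatives
```

Note how each principle reduces a body of laboratory research to a one-line calculation once the task analysis has supplied the arguments.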

The rationality principle is the assumption that the user acts rationally to attain goals. If an individual acts irrationally, analyzing the goal structure of the task would not serve any useful purpose. The rationality principle justifies the second major criterion of the engineering model: a task analysis framed in terms of goals and requirements. In the Model Human Processor, the task analysis determines the Goals, Operators, Methods, and Selection rules (GOMS) that characterize a task, as we described in Chapter 13.

After the GOMS analysis determines the goal structure, we can specify the information-processing sequence by defining the methods for achieving the goals, the elementary operations from which the methods are composed, and the selection rules for choosing between alternative methods. The end result is an information-processing model that describes the sequence of operations executed to achieve the task goals. Table 19.3 shows an example model for deciphering vowel-deletion abbreviations that describes the goal structure at the keystroke level. By specifying cycle times for the execution of the elementary operations, the model will generate a prediction for the time it will take to perform the task.

TABLE 19.3

GOMS Algorithm for Figuring out Vowel-Deletion Abbreviations

Algorithm                                               Operator Type

BEGIN
    Stimulus ← Get-Stimulus("Command")                  Perceptual
    Spelling ← Get-Spelling(Stimulus)                   Cognitive
    Initiate-Response(Spelling[First-Letter])           Cognitive
    Execute-Response(Spelling[First-Letter])            Motor
    Next-Letter ← Get-Next-Letter(Spelling)             Cognitive
    REPEAT BEGIN
        IF-SUCCEEDED Is-Consonant?(Next-Letter)         Cognitive
        THEN BEGIN
            Initiate-Response(Next-Letter)              Cognitive
            Execute-Response(Next-Letter)               Motor
            Next-Letter ← Get-Next-Letter(Spelling)     Cognitive
        END
        ELSE IF-SUCCEEDED Is-Vowel?(Next-Letter)        Cognitive
        THEN Next-Letter ← Get-Next-Letter(Spelling)    Cognitive
    END
    UNTIL Null?(Next-Letter)
    IF-SUCCEEDED Null?(Next-Letter)                     Cognitive
    THEN BEGIN
        Initiate-Response("Return")                     Cognitive
        Execute-Response("Return")                      Motor
    END
END
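One advantage of a keystroke-level model like the one in Table 19.3 is that it can be transcribed directly into executable form. The sketch below follows the table’s control structure, tallying operators by type as it types the abbreviation; the cycle times used for the final prediction are illustrative placeholder values, not calibrated estimates, and the Get-Next-Letter step is hoisted out of the two branches without changing the operator counts.

```python
VOWELS = set("aeiou")

def abbreviate(word):
    """Return (typed abbreviation, operator counts) per Table 19.3."""
    counts = {"Perceptual": 0, "Cognitive": 0, "Motor": 0}

    def op(kind):
        counts[kind] += 1

    letters = iter(word)
    typed = []
    op("Perceptual")                      # Get-Stimulus("Command")
    op("Cognitive")                       # Get-Spelling
    first = next(letters)
    op("Cognitive")                       # Initiate-Response(first letter)
    op("Motor")                           # Execute-Response(first letter)
    typed.append(first)
    nxt = next(letters, None)
    op("Cognitive")                       # Get-Next-Letter
    while nxt is not None:                # REPEAT ... UNTIL Null?
        op("Cognitive")                   # Is-Consonant? test
        if nxt not in VOWELS:
            op("Cognitive")               # Initiate-Response
            op("Motor")                   # Execute-Response
            typed.append(nxt)
        else:
            op("Cognitive")               # Is-Vowel? test
        nxt = next(letters, None)
        op("Cognitive")                   # Get-Next-Letter
    op("Cognitive")                       # Null? test
    op("Cognitive")                       # Initiate-Response("Return")
    op("Motor")                           # Execute-Response("Return")
    return "".join(typed), counts

# Illustrative cycle times (ms) per operator type -- placeholders only.
CYCLE_MS = {"Perceptual": 100, "Cognitive": 70, "Motor": 70}

abbr, ops = abbreviate("command")
predicted = sum(CYCLE_MS[k] * n for k, n in ops.items())
print(abbr)       # cmmnd
print(predicted)  # predicted completion time in ms
```

Substituting empirically estimated cycle times for the placeholders would turn the operator tallies into a genuine keystroke-level time prediction.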

To illustrate how such an approach can be used, consider an experiment conducted by Ramkumar et al. (2016) that examined interactive medical image segmentation. This is a process that uses image manipulation software to partition a digital image (such as an X-ray) into nonoverlapping regions to aid in medical diagnoses and planning treatments. They had three physicians segment images of body organs in preparation for radiation therapy. They used two task prototypes, one that required the physician to draw contours of an anatomical structure to segment it, and one that required the physician to draw strokes that indicated the desired foreground and background of the organ, from which an algorithm created the segment. Each physician segmented images of four organs (the spinal cord, the lungs, the heart, and the trachea) using both prototypes.

Using a GOMS analysis of videos of the physicians’ performance, Ramkumar et al. (2016) identified 16 operators and 10 methods that were used to achieve the goal of segmentation of an organ. Operators included moving the cursor from the drawing region to a panel to select a tool, drawing a contour, and so on. The methods included combinations of the operators that were often performed together; for example, segmenting a single region by executing a click paint (the target region) operator, followed in succession by mouse move and draw operators. The researchers showed that the strokes approach was faster than the contour approach for large organs like the lungs, though not for smaller ones. However, though faster in some cases, the strokes approach also led to more errors than the contour approach, a finding that the researchers attributed to the fact that the strokes approach required more switching between tools than the contour approach.

The original GOMS and Model Human Processor framework has a number of shortcomings, which limit the accuracy of its predictive ability. It does not provide an account of performance changes that occur as skill is acquired, does not predict errors, assumes strictly serial processing, and does not address the effects of mental workload (Olson & Olson, 1990). However, extensions of the framework addressed issues of learning and errors (e.g., Lerch, Mantei, & Olson, 1989; Polson, 1988), and a family of GOMS models has been developed that has been successful at predicting efficiency of performance for a variety of tasks involving human–computer interaction (John, 2003; Olson & Olson, 1990). These include the use of text editors, graphics programs and spreadsheets, entering different kinds of keyboard commands, and manipulating files. GOMS models have also been used to generate a range of stimulus-response compatibility effects (see Chapter 13; Laird, Rosenbloom, & Newell, 1986) and to analyze the tasks performed by pilots with the flight management computer of a commercial aircraft (Irving, Polson, & Irving, 1994).

A variation of a GOMS analysis was used to design workstations for telephone toll and assistance operators (Gray, John, & Atwood, 1993). The human factors experts modeled operators’ performance at a new workstation, which a telephone company was considering for purchase, and compared their predictions with the operators’ performance at their old workstations. According to the workstation designers, the new workstation would reduce the average time each operator spent per call and so save the company money. However, the GOMS analysis predicted that the time per call would actually be longer with the new station than with the old. The researchers confirmed this prediction in a subsequent field study.

Digital Human Models

Digital human models are software design tools intended primarily for physical ergonomics, which deals with positions adopted by the human body and the loads imposed on it (Chaffin, 2005; Duffy, 2009). These tools allow a designer to create a virtual human being with specific physical attributes. The designer can then place the digital person in various environments and program it to perform specific tasks, like getting into an automobile or using a tool in a particular work environment. This enables the designer to evaluate the physical advantages and disadvantages of alternative designs relatively quickly and easily. More detailed aspects of performance, such as time and motion, field of view, work posture, and reach, are also available for additional analysis.

Any software system for creating digital humans must incorporate five elements (Seidl & Bubb, 2006): (1) the design of the digital human must take into account the number and mobility of the joints and accurately depict clothing; (2) the software must integrate anthropometric databases to assist in generation of digital humans with specific anthropometric characteristics; (3) the software must simulate posture and movement; (4) the software must include a way to analyze attributes relevant to the product or system being developed; and (5) it should be possible to integrate the digital human model into a virtual world representing the design environment. Different digital modeling systems will vary with respect to the anthropometric database used, the algorithms used to simulate motion, and the analysis tools that are available.
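Element (2) can be illustrated in miniature: given population statistics for a body dimension, a digital human at a target percentile can be generated by inverting the distribution. The mean and standard deviation below are placeholder values, not a real anthropometric database, and real body dimensions are not all normally distributed.

```python
from statistics import NormalDist

# Assumed (placeholder) stature statistics for one population, in mm.
STATURE = NormalDist(mu=1755, sigma=71)

def stature_at_percentile(p):
    """Stature (mm) at percentile p (0 < p < 100), assuming normality."""
    return STATURE.inv_cdf(p / 100)

# Generate the 5th-, 50th-, and 95th-percentile digital humans'
# stature, the boundary cases a designer would typically evaluate.
for p in (5, 50, 95):
    print(f"{p}th percentile: {stature_at_percentile(p):.0f} mm")
```

A full digital human model repeats this kind of calculation across many correlated dimensions, which is one reason dedicated anthropometric databases and tools are needed.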

Digital models cannot do everything. They have only a limited ability to incorporate differences in human sizes and shapes, to reproduce human body posture, and to predict human motion patterns (Woldstad, 2006). Also, because it is not always clear how these packages choose the algorithms they use to construct the models, it can be difficult for the designer to judge the accuracy of the “people” they produce.

JACK and RAMSIS are two digital human modeling tools used by designers in the analysis of automobile interiors (Hanson, Blomé, Dukic, & Högberg, 2006; Seidl & Bubb, 2006). To increase the applicability of these tools and to make them widely available within an organization (Saab Automobile, Sweden), Hanson et al. developed an internal Web-based usage guide and documentation system accessible by all members of a design team. The guide outlined a series of steps beginning with identifying the goal of modeling, how to prepare and use the modeling tool (modeling the people, physical environment, and task), and how to formulate recommendations (including results and discussion). They stored the data from all analyses (even those still underway) in a centralized database. The result was an efficient system for organizing, conducting, and documenting simulation projects within the organization.

INTEGRATIVE COGNITIVE ARCHITECTURES

The engineering models that we have discussed so far are not very precise, although they are adequate for many purposes. In some cases, however, we may need more accurate predictions. These can be provided by integrative cognitive architectures. An integrative cognitive architecture is a relatively complete information-processing system, or unified theory, intended to provide a basis for developing computational models of performance in a range of specific tasks. This approach was introduced in Box 4.1, in which we mentioned three of the most prominent cognitive architectures: ACT-R (Adaptive Control of Thought–Rational; Borst & Anderson, 2015), Soar (States, Operators, and Results; Howes & Young, 1997), and EPIC (Executive Process Interactive Control; Kieras, 2017).

All of these architectures are production systems (see Chapter 11), which rely on production rules (IF … THEN statements that specify actions that occur when certain conditions are met) and a memory representation of the task to model cognitive processing. When the conditions for a given production are present in working memory, the production will “fire,” and this produces a mental or physical action. The architectures differ in details, such as the extent to which processing is serial or parallel, and whether the architecture is more applicable to higher-level cognitive tasks, such as language learning and problem solving, or to perceptual-motor tasks, such as responding to two simultaneous stimuli.
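The recognize-act cycle common to these architectures can be conveyed with a toy production system. The rules and working-memory contents below are invented for illustration, and real architectures add machinery (activation values, utilities, conflict resolution, perceptual-motor modules) that this sketch omits.

```python
# A minimal production-system cycle: rules fire when their conditions
# match working memory, and their actions modify working memory.
working_memory = {"goal": "dial", "digit-available": True}

# Each production: (name, condition on WM, action that updates WM).
productions = [
    ("start-dial",
     lambda wm: wm.get("goal") == "dial" and wm.get("digit-available"),
     lambda wm: wm.update({"action": "press-digit",
                           "digit-available": False})),
    ("finish",
     lambda wm: wm.get("goal") == "dial" and not wm.get("digit-available"),
     lambda wm: wm.update({"goal": "done"})),
]

def run(wm, rules, max_cycles=10):
    """Recognize-act loop: fire the first matching rule each cycle."""
    trace = []
    for _ in range(max_cycles):
        fired = next((r for r in rules if r[1](wm)), None)
        if fired is None:          # no rule matches: processing halts
            break
        trace.append(fired[0])
        fired[2](wm)               # the action modifies working memory
    return trace

print(run(working_memory, productions))  # ['start-dial', 'finish']
```

The serial fire-one-rule-per-cycle loop here reflects one design choice; as noted above, the architectures differ in exactly this respect, with some permitting parallel matching or firing.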

The first applications of the ACT and Soar architectures were to cognitive tasks that required problem solving, learning, and memory. In contrast, because EPIC includes perceptual and motor processors, it could simulate many aspects of multiple-task performance (for instance, driving; see Chapter 9). The most recent version of ACT-R, version 7.3 (Bothell, 2017), also includes perceptual (vision and audition modules) and motor (motor and speech modules) processors, and can model multiple-task performance. Although all these architectures were first developed to provide comprehensive accounts of basic cognitive phenomena, they have been used for applied problems in areas such as driving and interactions with digital devices.

Developing models within one of these architectures is a complex, time-consuming process that will require training. Even a skilled modeler will be challenged to develop models for a specific design problem, such as determining which of several alternative interfaces is best for a system. Consequently, some modelers have tried to simplify the modeling process (e.g., Salvucci & Lee, 2003).

Some engineering models can simulate human performance in complex human–machine systems. As Henninger and Whitaker (2015, p. 86) note, “Human behavioral modeling is motivated by the need to understand how people will react to a variety of possible environmental stimuli. It is used in war gaming … to understand enemy reactions, for marketing and product development decisions, in policy development for understanding policy alternatives and by organizational analysts to support organizational decisions.” One model of this type is called the Human Operator Simulator (Harris, Iavecchia, & Dick, 1989; Pew, 2008), which helps design interfaces for weapon systems. It is a software system consisting of a resident Human Operator Model and a language that the designer uses to specify equipment characteristics and operator procedures. Much like other cognitive architectures, the Human Operator Model contains information-processing submodels for performing different subtasks (“micro-actions”). The major process submodels in the Human Operator Simulator are shown in Figure 19.3.

FIGURE 19.3 Major submodels and knowledge lists in the Human Operator Simulator.

For simulating performance in a variety of weapons and flight systems, the designer must specify three major components of the task (see Figure 19.4; Harris et al., 1989, p. 286):

FIGURE 19.4 The three major Human Operator Simulator simulation components that are connected through the interface.

1. Environment (e.g., number, location, speed, and bearing of targets);

2. Hardware system (e.g., sensors, signal processors, displays, and controls);

3. Operator procedures and tactics for interacting with the system and for accomplishing mission goals.

The designer must also specify the interfaces between the three components: how information is passed from one component to the others. These interfaces determine such things as how well the hardware will detect changes in the environment, how heat and other environmental stressors will affect performance, and how difficult it will be for the operator to perform the required tasks. The simulation will produce timelines and accuracy predictions for task and system performance. The Human Operator Simulator is well suited to analyzing effects of control/display design, workstation layout, and task design.

Another human performance modeling technology used by the military is called task network modeling, or discrete event simulation. This modeling strategy is incorporated in commercially available applications like Micro Saint Sharp (Schunk, Bloechle, & Laughery, 2002). To use a task network model, the designer must first conduct a task analysis to decompose a person’s functions into tasks, and then construct a network depicting the task sequence. After the initial task analysis, task network modeling is relatively easy to do and understand. It can include hardware and software models (which “plug in” at the appropriate points in the task network), which means that the complete human–machine system can be represented in the model (Dahn & Laughery, 1997). Another commercial application, the Integrated Performance Modeling Environment, combines the network modeling capabilities of Micro Saint Sharp with the modeling of the human information processing provided by the Human Operator Simulator.
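The flavor of a task-network model can be conveyed with a few lines of discrete-event-style simulation. The network below (detect → diagnose → respond, with some chance of repeating the diagnosis) and all of its durations and branch probabilities are invented for illustration; tools like Micro Saint Sharp add queuing, resources, and workload modeling on top of this basic idea.

```python
import random

# Each task: mean duration (s) and successors with branch probabilities;
# None marks exit from the network. All numbers are invented.
NETWORK = {
    "detect":   {"mean": 1.2, "next": [("diagnose", 1.0)]},
    "diagnose": {"mean": 3.0, "next": [("respond", 0.8), ("diagnose", 0.2)]},
    "respond":  {"mean": 2.5, "next": [(None, 1.0)]},
}

def run_once(net, rng, start="detect"):
    """Simulate one pass through the task network; return total time."""
    t, task = 0.0, start
    while task is not None:
        spec = net[task]
        # Sample a duration around the mean (20% sd, clamped at zero).
        t += max(0.0, rng.gauss(spec["mean"], 0.2 * spec["mean"]))
        draw, cumulative = rng.random(), 0.0
        for successor, p in spec["next"]:
            cumulative += p
            if draw <= cumulative:
                task = successor
                break
    return t

rng = random.Random(1)
times = [run_once(NETWORK, rng) for _ in range(1000)]
print(round(sum(times) / len(times), 1))  # mean completion time, s
```

Monte Carlo runs like these yield the timelines and completion-time distributions that make task network models useful early in design, before any prototype exists.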

Other widely used integrative architectures developed primarily for design purposes include (Pew & Mavor, 1998): COGNET (COGnition as a NETwork of tasks), which is used for building user models for intelligent interfaces; MIDAS (Man–machine Integrated Design and Analysis System), developed to model human performance in the context of aviation; and OMAR (Operator Model Architecture), intended to evaluate the procedures of operators in complex systems. An array of integrative architectures is available, and they are continually being developed and revised. A designer must examine the specific details of each modeling architecture relative to her needs and concerns, and make an informed choice as to which is best to use for her specific purpose.

CONTROL THEORY MODELS

Control theory models have a long history of use in human factors (Jagacinski & Flach, 2003). They are specialized for certain tasks, such as piloting an aircraft, that require monitoring and controlling operations of complex systems. Control theory models view the operator as a control element in a closed-loop system (see Figure 19.5). They assume that operators approximate the characteristics of good electromechanical control systems, subject to the limitations inherent in human information processing. Early models of this kind were limited in what they could do; they were only useful for dynamic systems involving one or more manual control tasks. Now, we have comprehensive models that span the range of supervisory activities engaged in by the operators of a complex system.

FIGURE 19.5 Closed-loop, control theory view of a human–machine system.
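The closed-loop view in Figure 19.5 can be made concrete with a minimal tracking simulation: the “operator” is modeled as a proportional controller with a reaction-time delay, driving a first-order “machine.” All of the gains, delays, and dynamics below are illustrative, not fitted to data; real control-theoretic operator models are far more sophisticated.

```python
DT = 0.05     # simulation time step, s
DELAY = 0.2   # operator reaction-time delay, s
GAIN = 2.0    # operator proportional gain
LAG = 1.0     # first-order "machine" time constant, s

def simulate(target=1.0, duration=5.0):
    """Track a step input; return the system output at the end."""
    delay_queue = [0.0] * int(DELAY / DT)  # errors awaiting the operator
    output = 0.0
    for _ in range(int(duration / DT)):
        delay_queue.append(target - output)      # operator perceives error...
        control = GAIN * delay_queue.pop(0)      # ...but acts on an old sample
        output += DT * (control - output) / LAG  # machine responds
    return output

# Purely proportional control leaves a steady-state error: the output
# settles near target * GAIN / (1 + GAIN) = 2/3 rather than at 1.0.
print(round(simulate(), 3))
```

Even this toy loop exhibits the closed-loop properties (delay, gain, residual error) that control theory models quantify when evaluating whether an operator can keep a system stable.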

Some fundamental requirements have driven the development of comprehensive, multitask control theory models (Baron & Corker, 1989). First, a system model must represent the operators together with all of the nonhuman aspects of the system. Second, the cognitive and decision processes that characterize human performance in the complex system environment must be articulated clearly. Finally, communication between crew members and between operator and machine must be modeled, as should each crew member’s mental model of the state of the system, goals, and so on.

As one example, the Procedure-Oriented Crew Model (PROCRU) was developed to evaluate the effects of changes in system design and procedures on the safety of landing approaches of aircraft (Baron & Corker, 1989; Baron et al., 1990). This application illustrates how the control-theoretic approach can help the designers develop comprehensive models of very complex systems. PROCRU is a closed-loop system that has separate models for the air-traffic controller, landing aids provided by the air-traffic control system, the aircraft, and crew members (a pilot and co-pilot; Vidulich, Tsang, & Flach, 2016).

The pilot models are, like the other components of the system, based on a control-theoretic information-processing structure. The pilots are assumed to have a set of tasks, or procedures, to be performed. Which procedure to perform, both initially and whenever the previous one has been completed, is selected on the basis of the “expected gain” associated with each remaining task. Expected gain is a function of task priorities established by the flight mission and an estimate of the perceived urgency of performing particular tasks. When a procedure is chosen for execution, no other procedure will be considered during the time required to accomplish the chosen task.
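The procedure-selection step can be sketched as follows. Here expected gain is modeled simply as priority times urgency, a deliberate simplification of PROCRU’s actual computation, and the procedures and numbers are invented for illustration.

```python
# Pending procedures: (name, mission priority, perceived urgency).
# All entries and values are hypothetical.
procedures = [
    ("monitor-glideslope", 0.9, 0.8),
    ("radio-callout",      0.6, 0.9),
    ("configure-flaps",    0.8, 0.5),
]

def expected_gain(priority, urgency):
    """Simplified stand-in for PROCRU's expected-gain function."""
    return priority * urgency

def next_procedure(pending):
    """Select the pending procedure with the highest expected gain."""
    return max(pending, key=lambda p: expected_gain(p[1], p[2]))

print(next_procedure(procedures)[0])  # monitor-glideslope (gain 0.72)
```

In the full model, urgency estimates change continuously as the simulated situation evolves, so the winning procedure differs from one decision point to the next.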

PROCRU and other comprehensive control theory models do more than just predict how fast or accurate a person will be. They produce dynamic output: a continuous simulation of how the system will function over time. The simulation will vary as the representation of the situation evolves (Baron et al., 1990). Although we know that some aspects of control theory models work well in many contexts, we cannot say that they are true explanations of the way that complex systems operate over time. We have no empirical validation for comprehensive models such as PROCRU, and so we cannot say that they accurately represent what happens in the course of system operation.

FORENSIC HUMAN FACTORS

The decisions we make as designers, whether they are based on data or something else, determine the usability and safety of the final product our company sells. When something goes wrong and people using the product get hurt, a human factors expert who was involved in the product design must share the responsibility for design imperfections. Even human factors experts who were not involved in the development of a product may be asked to evaluate the product and its development process to determine what went wrong. The involvement of human factors considerations in the legal system is called forensic human factors and ergonomics (Dror, 2013; Noy & Karwowski, 2005).

LIABILITY

An organization is responsible for the safety of many people. Primarily, these are the people who use the products or services produced by the organization and the workers who are employed by the organization. If an organization fails to meet this responsibility, it can be held liable in a court of law. Thus, an organization must maintain safe practices and ensure that these practices can be justified if someone calls them into question.

In the U.S., safety in the workplace is governed by the directives of the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH). Some of the guidelines we discussed in Chapter 16 were determined by OSHA and NIOSH. OSHA was created by the passage of the Occupational Safety and Health Act of 1970 to ensure a safe work environment. It is responsible for safety and health regulations, and requires employers to reduce workplace hazards and to implement safety and health programs that inform and train their employees (www.osha.gov/). An organization that knowingly or unknowingly violates these standards may be subject to citations and fines levied by OSHA.

NIOSH was established in conjunction with OSHA to provide the research and information on which the OSHA regulations are based, as well as education and training in occupational safety and health. Human factors specialists contribute to the development and evaluation of the standards. Human factors specialists also devise safety and training guidelines that keep the employer in compliance with OSHA regulations and ensure that employees will follow the safety procedures.

When an employee (or visitor) is injured or dies in an organization’s workplace, the organization may be responsible. This responsibility also extends to people outside the organization who buy or sell the organization’s products or services. If the organization is responsible, the law says that the organization has been negligent. Negligence is either criminal or civil. Criminal negligence occurs when the organization willfully violates the laws established to ensure safe products and safe work environments.

If the organization has not been criminally negligent, it may nonetheless have breached its civil responsibility (“duty of care”) to its employees or customers. The law distinguishes between product liability (Wardell, 2005) and service liability cases; both arise from a failure of performance, in the first case of a product and in the second case of a person. When someone is injured as a result of such failures, the victim or the victim’s family may undertake litigation to prove negligence and, where appropriate, obtain compensation for the losses. The law will decide negligence by evaluating whether “reasonable care” was taken in the design and maintenance of products and equipment (Cohen & LaRue, 2006).

A now-famous skit aired on Saturday Night Live in 1976. It starred Dan Aykroyd as sleazy toy manufacturer “Irwin Mainway,” who attempted to justify the extreme danger of children’s toys like “Bag O’ Glass” and “Johnny Switchblade” to an incredulous consumer advocate (played by Candice Bergen). The skit is funny even decades later, because the product (a doll with spring-loaded knives under its arms) was obviously inappropriate for its intended users, regardless of the wild justifications offered by Mr. Mainway. Such mismatches between product design and user capabilities create hazards, risks, and dangers. A hazard is a situation in which there is the potential for injury or death; risk is the probability of injury or death occurring; danger is the combination of hazard and risk. A danger exists when there is a hazard for which there is a significant risk of harm. Hence, we see immediately that “Bag O’ Glass” and “Johnny Switchblade” are unreasonably dangerous toys.
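The hazard/risk/danger distinction above is easy to misstate, so a minimal sketch may help. The 0.01 risk threshold is an arbitrary assumption for illustration; in practice, what counts as "significant" risk is exactly the judgment the courts and the criteria below must settle.

```python
# Illustrative sketch (not from the text) of the hazard/risk/danger
# distinction: a danger exists only when a hazard carries a significant
# probability of harm. The threshold value is an invented assumption.
SIGNIFICANT_RISK = 0.01  # hypothetical cutoff for "significant" risk

def is_danger(has_hazard: bool, risk: float) -> bool:
    """Hazard = potential for injury or death; risk = probability of
    injury or death; danger = hazard combined with significant risk."""
    return has_hazard and risk >= SIGNIFICANT_RISK

print(is_danger(True, 0.25))   # hazard with high risk -> a danger
print(is_danger(True, 0.001))  # hazard, but negligible risk -> not a danger
print(is_danger(False, 0.5))   # no hazard -> no danger, whatever the number
```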

Few products are as clear-cut as “Johnny Switchblade”: it is usually very difficult to determine whether a product is unreasonably dangerous. A high frequency of injury does not necessarily mean that the risk associated with a product is unreasonable, nor does an absence of injury indicate that the risk is reasonable (Statler, 2005). For example, chainsaws are dangerous, but at least some of the risk associated with their use is inherent in the product itself. Several criteria that form the basis of the “unreasonable danger” test are as follows (Weinstein, Twerski, Piehler, & Donaher, 1978):

1. The usefulness and functionality of the product.

2. The availability of similar but safer products that serve the same purposes.

3. The likelihood and seriousness of injury.

4. How obvious the danger is.

5. Common knowledge and normal public expectation of the danger.

6. Whether injury can be avoided by being careful in use of the product (including the effect of instructions or warnings).

7. Whether the product could be redesigned without impairing the usefulness of the product or making it too expensive.
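One plausible way to use the seven criteria above in a design review is as a structured checklist. The sketch below is a hypothetical convention (a flag of `True` means the reviewer judged that criterion to point toward unreasonable danger); nothing here comes from Weinstein et al. beyond the criterion wording.

```python
# Hypothetical checklist built from the seven "unreasonable danger"
# criteria listed in the text. The True/False flagging convention is an
# invented illustration, not a method prescribed by Weinstein et al.
UNREASONABLE_DANGER_CRITERIA = [
    "usefulness and functionality of the product",
    "availability of similar but safer products",
    "likelihood and seriousness of injury",
    "obviousness of the danger",
    "public knowledge and expectation of the danger",
    "avoidability of injury through careful use",
    "feasibility of a safer redesign at reasonable cost",
]

def review(flags):
    """Pair each criterion with the reviewer's concern flag and return
    the criteria that point toward unreasonable danger."""
    return [c for c, flagged in zip(UNREASONABLE_DANGER_CRITERIA, flags)
            if flagged]

# Example: for a riding mower, a reviewer might flag criteria 2, 3, and 7
# (a safer alternative exists, injuries are serious, redesign is feasible).
print(review([False, True, True, False, False, False, True]))
```

Such a checklist records which criteria drove a judgment; it does not replace the judgment itself, which remains qualitative.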

Standards, such as those published by the American National Standards Institute (ANSI), make possible contractual agreements about, and identification of, mass-produced products. Standards are intended to guarantee uniformity in mass-produced goods, and safety is only one of many concerns that published standards are intended to address. However, adherence to published safety standards is not sufficient to ensure a safe product and does not absolve a manufacturer of liability. ANSI and the courts regard standards as only the minimum requirements for a reasonable product. The criteria for the standards may be outdated, standards published by different institutions may be inconsistent, the risk allowed by the standards may still be significant, and many aspects of product design will not be covered by standards. Generally, standards may not be good enough, and the time and money spent trying to meet minimum requirements set forth in the standards could be better spent in research and design (Peters, 1977).

An example illustrating the inadequacy of industry standards is a certain kind of riding-mower accident (Statler, 2005). A significant number of accidents with riding mowers involve backing over someone, usually a child, while the mower blade is moving. These accidents occur because a small child is difficult to see behind the mower, and the driver also needs to look forward at the mower controls while backing up. In the 1970s, the U.S. Consumer Product Safety Commission (CPSC) urged manufacturers to stop the blade when the mowers are put in reverse, and a few companies added no-mow-in-reverse devices to their riding lawnmowers. Even though this safer design was economically feasible, most companies did not make the change, and industry standards have not been modified to require the design change. Stuart Statler (2005), former Commissioner of the CPSC, notes, “Not unexpectedly, the industry standard for riding mowers in effect over the past two decades represents, more or less, a low point for safety for a product that, by its very nature, engendered the highest degree of risk to life, namely, severe injury or death from backover” (p. 25).

The “unreasonable danger” test was refashioned by Weinstein et al. (1978) into a set of criteria that a designer can apply to ensure that any remaining danger is reasonable:

1. Delineate the scope of product uses.

2. Identify the environments within which the product will be used.

3. Describe the user population.

4. Postulate all possible hazards, including estimates of probability of occurrence and seriousness of resulting harm.

5. Delineate all alternative design features or production techniques, including warnings and instructions, that can be expected to effectively mitigate or eliminate the hazards.

6. Evaluate such alternatives relative to the expected performance standards of the product, including the following:

   a. Other hazards that may be introduced by the alternatives.

   b. Their effect on the subsequent usefulness of the product.

   c. Their effect on the ultimate cost of the product.

   d. A comparison to similar products.

7. Decide which features to include in the final design. (p. 140)

When an injury occurs for which a product is implicated as a possible cause, a legal complaint may be filed in civil court. The plaintiff (victim) must not only prove that the product was the likely cause of her injury, but she must also establish that a legal responsibility to the consumer was not met by the manufacturer of the product. A manufacturer may fail to meet its legal responsibilities in one of three ways: negligence, strict liability, or breach of warranty (Moll, Robinson, & Hobscheid, 2005). Negligence, as we have been discussing, is focused on the behavior of the defendant (the manufacturer), in that the defendant failed to take reasonable actions that would have prevented the accident. If the defendant is accused of engaging in reckless and wanton misconduct, the charge is one of gross negligence (which may also be criminal negligence).

Strict liability focuses on the product and not the defendant. Although the manufacturer need not have been in any way negligent, the manufacturer can be held liable for any product defect if that defect was the cause of the injury. Under strict liability, it is not only the manufacturer that can be held liable. The manufacturer must have sold the product either to the plaintiff or to one of many members of a distributive chain, all of whom may be named as defendants in the trial (Weinstein et al., 1978): (1) the producer of the raw material; (2) the maker of a component part; (3) the assembler or subassembler; (4) the packager of the final product; (5) the wholesaler, distributor, or middleman; (6) the person who holds the product out to be his or her own; and (7) the retailer. Any or all of these members can be held liable if it can be proven that a product was defective when it left their possession.

Breach of warranty occurs when a product fails to function as the defendant stated it would. An express warranty is an explicit statement in an oral or written contract. Implied warranties are not explicitly stated but are ones that a person could reasonably infer, for example, from the advertised uses of a product or from the product’s name. For example, the drug Rogaine, marketed as a baldness cure in the U.S. and other countries, is sold under the name Regaine in the U.K. Under U.S. product liability law, the name Regaine would provide an implied warranty that the product will cure baldness (if you use it you will “regain” your hair).

EXPERT TESTIMONY

Human factors specialists are called upon in the development of a product or system to improve the product and so reduce a manufacturer’s risk of liability. They may also be hired to provide expert testimony during litigation about human limitations and the product in question (Cohen & LaRue, 2006). In the role of an expert witness, the forensic human factors consultant will first be contacted by an attorney, either the plaintiff’s or the defendant’s. The consultant must make sure that the issues involved in the case fall within her areas of expertise, that she has no apparent conflict of interest, and that she will be able to work with the attorney (Hess, 2005). After she and the attorney reach an agreement, she will examine all of the information in the case to determine the relevant facts (Askren & Howard, 2005). She will also inspect the product or the location where the accident or injury occurred. She may also need to conduct some research. This could involve reading standards, guidelines, and relevant scientific literature, and possibly even conducting an experiment.

After all this, the consultant will write a report for the attorney, in which she integrates the information from all these different sources and summarizes her opinions relevant to the case. If the consultant is called to provide testimony as an expert witness, this will take place in two stages. First, she provides her opinion and answers questions from the opposing attorney in a deposition. If the consultant’s evidence is strong, the case will often end here, because the opposing attorney will be unwilling to let a jury listen to evidence detrimental to his client’s case. If it does not end with the deposition, the consultant will have to testify in court in front of a jury, answering questions posed both by her attorney and by the opposing attorney.

An example case of some notoriety involved incidents of “unintended acceleration” of the Audi 5000 automobile. As we discussed in Chapter 15, unintended acceleration incidents in vehicles with automatic transmissions have been reported since the 1940s (Schmidt, 1989). Such incidents are relatively rare and are not limited to any particular make or model. However, in the late 1980s, a number of people charged that the Audi 5000 was involved in an unusually high number of unintended acceleration accidents.

These charges peaked as a result of an incident in February 1986 in which a woman driving an Audi 5000 struck and killed her 6-year-old son when the car accelerated out of control. Her lawsuit, filed in April of that year, claimed that a design defect in the Audi transmission was the cause of the unintended acceleration. The case received considerable media coverage, culminating in November with an exposé on the CBS investigative reporting program 60 Minutes. Following this program, a flood of claims was made alleging incidents of unintended acceleration involving the Audi 5000.

The litigation against Audi’s parent company, Volkswagen of America, proceeded in two phases (Huber, 1991). In the first phase, the plaintiffs insisted that there was a flaw in the Audi transmission, as in the initial case described above. However, the evidence overwhelmingly indicated that the unintended acceleration was due to foot placement errors and not mechanical failure, leading the jury in the initial case to return a verdict in favor of the defendants in June 1988 (Baker, 1989). At that point, at least one plaintiff returned to court, charging that the sudden acceleration was in fact due to foot placement errors that would not have occurred if the pedals had been designed differently.

Many product liability cases hinge on human capabilities and design, and this is apparent in the case of the Audi 5000. In the first phase of Audi’s litigation, the human factors expert could have testified how likely it was that an instance of unintended acceleration was due to an undetected foot placement error. This same testimony, together with information about the sizes and locations of the pedals, could have been used by opposing counsel during the second phase of Audi’s litigation. However, to make such a case against Audi, it would have to be shown that unintended acceleration incidents were in fact greater for the Audi than for other automobiles and that the pedals were placed in such a way that the likelihood of foot placement errors was greater in the Audi than for other automobiles. Because neither of these claims is true, the defense could use the testimony of a human factors specialist to prevent Audi from being unjustly found negligent.

Despite the fact that Audi was not found negligent in any of the cases, and evidence has indicated that virtually all instances of unintended acceleration are due to the driver mistakenly stepping on the gas pedal, unintended acceleration cases continue to be filed in courts. On August 6, 2006, a jury awarded $18 million to the driver of an SUV who charged that a defective speed control system was responsible for her crash on an interstate highway (Alongi & Davis, 2006). The high stakes involved for automobile manufacturers and plaintiffs in such cases suggest that human factors experts will continue to play an important role in product liability cases.

The issues described in the Audi 5000 case are illustrative of the types of questions that arise in legal proceedings for which the testimony of a human factors specialist may be of value. During litigation, a human factors specialist can provide information pertinent to the following questions:

1. Was the product design, service, or process appropriate for the knowledge, skills, and abilities (KSAs) to be expected of normally functioning users (or clients) in the expected operational environment?

2. If not, could the service or the product design have been modified so that it would have been appropriate to the KSA of the anticipated user population?

3. If there was less than an optimal match between product design and the KSA of the expected user population, was an attempt made to modify the user population KSA by adequate selection procedures and/or by providing appropriate information by means of adequate training, instructions, and warnings?

4. If not, was it technically feasible to have provided such selection procedures and/or information transfer?

5. If [testimony indicates] that the information provided was not appropriate to the idiosyncrasies of the injured party, was it technically possible for the design of the product, selection, and/or information exchange to have been altered to accommodate those idiosyncrasies? (Kurke, 1986, p. 13)

Because litigation is adversarial, attorneys for both the defendant and the plaintiff are legally obligated to use any (ethical) means possible to win the case for their clients. Consequently, rendering expert opinion is rarely a pleasant experience. During cross-examination, the human factors expert may be subjected to what is essentially a personal attack. The expert will be called upon to defend his credentials and the basis of his opinion. He will be asked misleading questions and may have his testimony restricted to exclude possibly relevant information on the basis of opposing counsel’s objections.

The expert witness is in a position of authority regarding the issues on which she testifies. The expert witness is also paid, often lavishly, for her time by one of the interested parties. The combination of unquestioned authority and monetary compensation puts the expert witness, as well as the field of human factors, in a position where professional and scientific integrity come into question. For this reason, the Human Factors and Ergonomics Society has a section on principles of Forensic Practice in its code of ethics (Human Factors and Ergonomics Society, 2005). These principles outline behaviors that ensure that the expert witness is unbiased and not motivated by personal gain, that the witness adheres to high scientific and personal standards, and that the witness does not abuse her position of authority and so damage the reputation of the human factors profession.

HUMAN FACTORS AND SOCIETY

As the field of human factors emerged from World War II, its emphasis was on the “lights and buttons” systems so often encountered in the military. Since that time, the field has rapidly expanded. It now includes a wide range of domains covering both the military and the private sectors. Many forces have led to the rise of human factors, the most compelling being the rapid growth of high technology systems in which human performance is often the variable that limits the performance of the system. With each new technological development, a host of specific human factors issues arise that are unique to that technology, though the basic and applied principles of human performance acquired through years of research remain applicable. Other pressures that have led to increasing emphasis on human factors include greater concern with workers’ health and safety, demands from consumers for products that are easier to use, and the financial benefits that arise from improvements in the match between the human and the machine.

As the field of human factors and ergonomics has grown, the range of disciplines that interact to form its knowledge base has also grown. Participants in the field include graduates not only of human factors programs, but also from such fields as psychology, industrial engineering, civil engineering, biomechanics, physiology, medicine, cognitive sciences, machine intelligence, computer science, anthropology and sociology, and education. The highly interdisciplinary nature of the profession encourages communication across discipline boundaries. Such interdisciplinary communication provides a basis for fundamental advances in scientific understanding that contribute to society through more usable, safer products and services.

An immediate application of human factors research is the design of equipment and environments for the very young, the aged, and the handicapped. In recent years, our society has become more aware of the challenges that face such special populations. One challenge to human factors is to improve the quality of life for these populations through designs that allow them to attain personal goals and fulfillment with the same ease as those not so challenged. Human factors experts have a responsibility to see that products intended for use by special populations are more than just modifications of products designed for the population at large.

With the development of the Internet and the World Wide Web, and the many computer-mediated activities in which we now engage, we often talk about a concept of “universal design” (Stephanidis & Akoumianakis, 2011). Universal design ensures that anyone will have access to information and services at all places and at all times. One goal of the proponents of universal design is the development of a code of practice. This code of practice is intended to see that considerations of usability during product and system development are not restricted to just an average, able user but to the larger population of users of diverse abilities. In advocating that systems should work well for all users, Vanderheiden (2005) emphasizes, “Web content that is more usable by individuals who have disabilities is also more usable by individuals with mobile technologies and, often, more understandable and usable by all users” (p. 281). We can make this claim for virtually all aspects of product and system design.

Because the forces that led to the founding and expansion of the human factors profession continue to exert their influence, human factors will continue to grow. Moreover, since technology is moving forward in leaps and bounds, providing us with new, complex machines whose effective use requires that we be able to interact with them intuitively and naturally, there will continue to be new frontiers for application of knowledge concerning human factors and ergonomics. Our efforts to provide a better integration of the basic facts of human performance and the applied concerns of system and product designers emphasize how usability engineering is a fundamental component of any design process.

RECOMMENDED READINGS

Bias, R. G., & Mayhew, D. J. (Eds.) (2005). Cost-Justifying Usability: An Update for the Internet Age. San Francisco, CA: Morgan Kaufman.

Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ: Erlbaum.

Gluck, K. A., & Pew, R. W. (Eds.) (2005). Modeling Human Behavior with Integrated Cognitive Architectures: Comparison, Evaluation, and Validation. Mahwah, NJ: Erlbaum.

Meister, D., & Enderwick, T. P. (2002). Human Factors in System Design, Development, and Testing. Mahwah, NJ: Erlbaum.

Noy, Y. I., & Karwowski, W. (Eds.) (2005). Handbook of Human Factors in Litigation. Boca Raton, FL: CRC.
