9
CLUMSY USE OF TECHNOLOGY

TECHNOLOGY CHANGE TRANSFORMS OPERATIONAL AND COGNITIVE SYSTEMS

There are several possible motivations for studying an operational system in relation to the potential for error and failure. The occurrence of an accident or a near miss is a typical trigger for an investigation. Accumulated evidence from incident databases may also provide a trigger to investigate “human error.”

Another important trigger for examining the potential for system breakdown is at points of major technology change. Technology change is an intervention into an ongoing field of activity (Winograd and Flores, 1987; Flores, Graves, Hartfield, and Winograd, 1988; Carroll, Kellogg, and Rosson, 1991). When developing and introducing new technology, one should realize that the technology change represents new ways of doing things; it does not preserve the old ways with the simple substitution of one medium for another (e.g., computer-based media for paper).

Technological change is, in general, transforming the workplace through the introduction and spread of new computer-based systems (Woods and Dekker, 2000). First, ubiquitous computerization has tremendously advanced our ability to collect, transmit, and transform data. In all areas of human endeavor, we are bombarded with computer-processed data, especially when anomalies occur. But our ability to digest and interpret data has failed to keep pace with our abilities to generate and manipulate greater and greater amounts of data. Thus, we are plagued by data overload.

Second, user interface technology has allowed us to concentrate this expanding field of data into one physical platform, typically a single visual display unit (VDU). Users are provided with increased degrees of flexibility for data handling and presentation in the computer interface through window management and different ways to display data. The technology provides the capability to generate tremendous networks of computer displays as a kind of virtual perceptual field viewable through the narrow aperture of the VDU. These changes affect the cognitive demands and processes associated with extracting meaning from large fields of data.

Third, heuristic and algorithmic technologies expand the range of subtasks and cognitive activities that can be automated. Automated resources can, in principle, offload practitioner tasks. Computerized systems can be developed that assess or diagnose the situation at hand, alerting practitioners to various concerns and advising practitioners on possible responses. These “intelligent” machines create joint cognitive systems that distribute cognitive work across multiple agents. Automated and intelligent agents change the composition of the team and shift the human’s role within that cooperative ensemble (see Hutchins, 1995a, 1995b for treatments of how cognitive work is distributed across agents).

One can guard against the tendency to see automation in itself as a cure for “human error” by remembering this syllogism (Woods and Hollnagel, 2006, p. 176):

All cognitive systems are finite (people, machines, or combinations). All finite cognitive systems in uncertain changing situations are fallible. Therefore, machine cognitive systems (and joint systems across people and machines) are fallible.

We usually speak of the fallibility of machine “intelligence” in terms of brittleness – how machine performance breaks down quickly on problems outside its area of competence (cf., Roth et al., 1987; Guerlain et al., 1996). The question, then, is not the universal fallibility or finite resources of systems, but rather the development of strategies that handle the fundamental tradeoffs produced by the need to act in a finite, dynamic, conflicted, and uncertain world.

Fourth, computerization and automation integrate or couple more closely together different parts of the system. Increasing the coupling within a system has many effects on the kinds of cognitive demands practitioners face. For example, with higher coupling, actions produce more side effects. Fault diagnosis becomes more difficult as a fault is more likely to produce a cascade of disturbances that spreads throughout the monitored process. Increased coupling creates more opportunities for situations to arise with conflicts between different goals.

Technology change creates the potential for new kinds of error and system breakdown as well as changing the potential for previous kinds of trouble. Take the classic simple example of the transition from an analog alarm clock to a digital one. With the former, errors are ones of imprecision – a few minutes off one way or another. With the latter, precision increases, but it now becomes possible to make order-of-magnitude errors in which the alarm is set to sound exactly 12 hours off (i.e., by confusing AM and PM modes). “Design needs to occur with the possibility of error in mind” (Lewis and Norman, 1986). Analysis of the potential for system breakdown should be a part of the development process for all technology changes.
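To make the contrast concrete, here is a minimal sketch (hypothetical code, not from the original analysis) of how a hidden AM/PM mode turns an otherwise identical setting action into an error of exactly 12 hours:

    # Hypothetical sketch of the AM/PM mode error described above.
    # The user's keystrokes are identical; only the hidden mode differs.

    def set_alarm(hour_12: int, minute: int, is_pm: bool) -> int:
        """Return the alarm time in minutes after midnight on a 12-hour clock."""
        hour_24 = (hour_12 % 12) + (12 if is_pm else 0)
        return hour_24 * 60 + minute

    intended = set_alarm(7, 0, is_pm=False)   # the user means 7:00 AM
    slipped = set_alarm(7, 0, is_pm=True)     # PM mode left active by mistake

    print((slipped - intended) / 60)          # 12.0 -- exactly 12 hours off

The analog clock affords only small errors of dial position; the digital mode structure makes a precise but categorically wrong setting just as easy to enter.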

This point should not be interpreted as part of a go/no-go decision about new technology. It is not the technology itself that creates the problem; rather, it is how the technological possibilities are utilized vis-à-vis the constraints and needs of the operational system. One illustration of the complex reverberations of technology change comes from this internal reflection on the impact of the new computer technology used in NASA’s new mission control in 1996 (personal communication, NASA Johnson Space Center, 1996):

We have much more flexibility in how our displays look and in the layout of the displays on the screens. We also have the added capabilities that allow the automation of the monitoring of telemetry. But when something has advantages, it usually has disadvantages, and the new consoles are no exception. First, there is too much flexibility, so much stuff to play with that it can get to the point where adjusting stuff on the console distracts from keeping up with operations. The configuration of the displays, the various supporting applications, the ability to ‘channel surf’ on the TV, all lead to a lack of attention to operations. I have seen teams miss events or not hear calls on the loops because of being preoccupied with the console. I have also witnessed that when a particular application doesn’t work, operations were missed due to trying to troubleshoot the problem. And this was an application that was not critical to the operations in progress. … Second, there’s too much reliance on automation, mainly the Telemetry Monitor program. I’m concerned that it is becoming the prime (and sometimes sole) method for following operations. When the crew is taught to fly the arm, they are trained to use all sources of feedback, the D&C panel, the window views, multiple camera views, and the spec. When we ‘fly’ the console, we must do the same. This point was made very evident during a recent sim when Telemetry Monitor wasn’t functioning. It took the team awhile to notice that it wasn’t working because they weren’t cross checking and then once they realized it they had some difficulty monitoring operations. If this were to happen in flight it could, at a minimum, be embarrassing, and, at a maximum, lead to an incorrect failure diagnosis or missing a failure or worse – such as a loss of a payload. The solution to this problem is simple. We need to exercise judgment to prioritize tending to the console vs. following operations. If something is required right now, fix it or work around it. If it’s not required and other things are going on, let it wait.

This commentary by someone involved in coping with the operational effects of technology change captures a pattern found in research on the effects of new levels of automation. New levels of automation transform operational systems. People have new roles that require new knowledge, new attentional demands, and new forms of judgment. Unfortunately, the NASA manager had one thing wrong – developing training to support the “judgment to prioritize” between doing the job versus tending to the interface has not proven to be a simple matter.

PATTERNS IN THE CLUMSY USE OF COMPUTER TECHNOLOGY

We usually focus on the perceived benefits of new automated or computerized devices and technological aids. Our fascination with the possibilities afforded by technology in general often obscures the fact that new computerized and automated devices also create new burdens and complexities for the individuals and teams of practitioners responsible for operating, troubleshooting, and managing high-consequence systems. The demands may involve new or changed tasks such as device setup and initialization, configuration control, or operating sequences. Cognitive demands change as well, creating new interface management tasks, new attentional demands, the need to track automated device state and performance, new communication or coordination tasks, and new knowledge requirements. These demands represent new levels and types of operator workload.

The dynamics of these new demands are an important factor because in complex systems human activity ebbs and flows, with periods of lower activity and more self-paced tasks interspersed with busy, high-tempo, externally paced operations where task performance is more critical. Technology is often designed to shift workload or tasks from the human to the machine. But the critical design feature for well-integrated cooperative cognitive work between the automation and the human is not the overall or time-averaged task workload. Rather, it is how the technology affects low-workload and high-workload periods, and especially how it affects the practitioner’s ability to manage workload, that makes the critical difference between clumsy and skillful use of the technological possibilities.

A syndrome, which Wiener (1989) has termed “clumsy automation,” is one example of technology change that in practice imposes new burdens even as it delivers some of the expected benefits. Clumsy automation is a form of poor coordination between the human and machine in the control of dynamic processes where the benefits of the new technology accrue during workload troughs, and the costs or burdens imposed by the technology occur during periods of peak workload, high criticality, or high tempo. Despite the fact that these systems are often justified on the grounds that they would help offload work from harried practitioners, we find that they in fact create additional tasks, force the user to adopt new cognitive strategies, and require more knowledge and more communication at the very times when the practitioners are most in need of true assistance. This creates opportunities for new kinds of human error and new paths to system breakdown that did not exist in simpler systems.

To illustrate these new types of workload and their impact on practitioner cognition and collaboration, let us examine two series of studies, one looking at pilot interaction with cockpit automation, and the other looking at physician interaction with new information technology in the operating room. Both series of studies found that the benefits associated with the new technology accrue during workload troughs, and the costs associated with the technology occur during high-criticality or high-tempo operations.

CLUMSY AUTOMATION ON THE FLIGHT DECK

Results indicate that one example of clumsy automation can be seen in the interaction between pilots and flight management computers (FMCs) in commercial aviation. Under low-tempo operations pilots communicate instructions to the FMCs which then “fly” the aircraft. Communication between pilot and FMC occurs through a multi-function display and keyboard. Instructing the computers consists of a relatively effortful process involving a variety of keystrokes on potentially several different display pages and a variety of cognitive activities such as recalling the proper syntax or where data is located in the virtual display page architecture. Pilots speak of this activity as “programming the FMC.”

Cockpit automation is flexible also in the sense that it provides many functions and options for carrying out a given flight task under different circumstances. For example, the FMC provides at least five different mechanisms at different levels of automation for changing altitude. This customizability is normally construed as a benefit that allows the pilot to select the mode or option best suited to a particular flight situation (e.g., time and speed constraints). However, it also creates demands for new knowledge and new judgments. For example, pilots must know about the functions of the different modes, how to coordinate which mode to use when, and how to “bumplessly” switch from one mode or level of automation to another. In other words, the supervisor of automated resources must not only know something about how the system works, but also know how to work the system. Monitoring and attentional demands are also created as the pilots must keep track of which mode is active and how each active or armed mode is set up to fly the aircraft.

In a series of studies on pilot interaction with this suite of automation and computer systems, the data revealed aspects of cockpit automation that were strong but sometimes silent and difficult to direct when time is short. The data showed how pilots face new challenges imposed by the tools that are supposed to serve them and provide “added functionality.” For example, the data indicated that it was relatively easy for pilots to lose track of the automated systems’ behavior during high-tempo and highly dynamic situations. Pilots would miss mode changes that occurred without direct pilot intervention during the transitions between phases of flight or during the high-workload descent and approach phases in busy airspace. These difficulties with mode awareness reduced pilots’ ability to stay ahead of the aircraft.

Pilots develop strategies to cope with the clumsiness and complexities of many modern cockpit systems. For example, data indicate that pilots tend to become proficient or maintain their proficiency on a subset of modes or options. As a result, they try to manage the system within these stereotypical responses or paths, underutilizing system functionality. The results also showed that pilots tended to abandon the flexible but complex modes of automation and switch to less automated, more direct means of flight control when the pace of operations increased (e.g., in crowded terminal areas where the frequency of changes in instructions from air traffic control increases). Note that pilots speak of this tactic as “escaping” from the automation.

From this and other research, the user’s perspective on the current generation of automated systems is best expressed by the questions they pose in describing incidents (Wiener, 1989):

• What is it doing now?

• What will it do next?

• How did I get into this mode?

• Why did it do this?

• Stop interrupting me while I am busy.

• I know there is some way to get it to do what I want.

• How do I stop this machine from doing this?

• Unless you stare at it, changes can creep in.

These questions and statements illustrate why one observer of human-computer interaction defined the term agent as “A computer program whose user interface is so obscure that the user must think of it as a quirky, but powerful, person” (Lanir, 1995, p. 68, as quoted in Woods and Hollnagel, 2006, p. 120). In other words, the current generation of cockpit automation contains several classic human-computer cooperation problems, e.g., an opaque interface.

Questions and statements like these point to automation surprises, that is, situations where crews are surprised by actions taken (or not taken) by the automated system. Automation surprises begin with miscommunication and misassessments between the automation and users, which lead to a gap between the user’s understanding of what the automated systems are set up to do, what they are doing, and what they are going to do. The initial trigger for such a mismatch can arise from several sources, for example, erroneous inputs such as mode errors, or indirect mode changes where the system autonomously changes states based on its interpretation of pilot inputs, its internal logic, and sensed environmental conditions. The gap results from poor feedback about automation activities and incomplete mental models of how the automation functions. Later, the crew is surprised when the aircraft’s behavior does not match the crew’s expectations. This is where questions like “Why won’t it do what I want?” and “How did I get into this mode?” arise.

When the crew is surprised, they have detected the gap between expected and actual aircraft behavior, and they can begin to respond to or recover from the situation. The problem is that this detection generally occurs when the aircraft behaves in an unexpected manner – flying past the top of descent point without initiating the descent, or flying through a target altitude without leveling off. In other words, the design of the pilot-automation interface restricts the crews’ ability to detect and recover from the miscoordination. If the detection of a problem is based on actual aircraft behavior, it may not leave a sufficient recovery interval before an undesired result occurs (low error-tolerance). Unfortunately, there have been accidents where the misunderstanding persisted too long to avoid disaster.
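The dynamic can be pictured with a toy model (a hypothetical sketch in Python; the mode names and numbers are illustrative and do not describe any particular FMC): an indirect mode change silently abandons an altitude capture, the pilot’s mental model is not updated, and the gap only becomes visible when the aircraft flies through the target altitude.

    # Hypothetical toy model of an automation surprise via an indirect mode change.
    class Autopilot:
        def __init__(self, altitude, target):
            self.mode = "ALT CAP"      # automation is capturing the cleared altitude
            self.altitude = altitude   # feet
            self.target = target
            self.vs = 0                # vertical speed, feet per minute

        def select_vertical_speed(self, fpm):
            # Indirect effect: dialing a vertical speed during capture re-modes
            # the automation and silently abandons the level-off.
            self.mode = "VS"
            self.vs = fpm

        def step(self):
            # One-second update of altitude under the currently active mode.
            if self.mode == "ALT CAP":
                self.altitude = max(self.altitude - 200, self.target)
            elif self.mode == "VS":
                self.altitude += self.vs / 60.0

    ap = Autopilot(altitude=10800, target=10000)
    ap.select_vertical_speed(-1500)              # pilot "tweaks" the descent rate
    pilot_expectation = "level off at 10000 ft"  # mental model still says ALT CAP

    for _ in range(60):
        ap.step()
    if ap.altitude < ap.target:
        print(f"surprise: expected to {pilot_expectation}, "
              f"but mode is {ap.mode} and the aircraft is at {ap.altitude:.0f} ft")

Nothing in this toy display forces the pilot to notice the re-moding at the time it happens; the surprise surfaces only through the aircraft’s trajectory, late in the game.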

CLUMSY AUTOMATION IN THE OPERATING ROOM: 1 – CENTRALIZING DATA DISPLAY

Another study, this time in the context of operating room information systems, reveals some other ways that new technology creates unintended complexities and provokes practitioner coping strategies (Cook and Woods, 1996b). In this case a new operating room patient monitoring system was studied in the context of cardiac anesthesia. This and other similar systems integrate what was previously a set of individual devices, each of which displayed and controlled a single sensor system, into a single CRT display with multiple windows and a large space of menu-based options for maneuvering in the space of possible displays, options, and special features. The study consisted of observing how the physicians learned to use the new technology as it entered the workplace.

By integrating a diverse set of data and patient monitoring functions into one computer-based information system, designers could offer users a great deal of customizability and options for the display of data. Several different windows could be called up, depending on how the users preferred to see the data. However, these flexibilities all created the need for the physician to interact with the information system – the physicians had to direct attention to the display and menu system and recall knowledge about the system. Furthermore, the computer keyhole created new interface management tasks by forcing serial access to highly inter-related data and by creating the need to periodically declutter displays to avoid obscuring data channels that should be monitored for possible new events.

The problem occurs because of a fundamental relationship: the greater the trouble in the underlying system or the higher the tempo of operations, the greater the information processing activities required to cope with the trouble or pace of activities. For example, demands for monitoring, attentional control, information, and communication among team members (including human-machine communication) all tend to go up with the tempo and criticality of operations. This means that the burden of interacting with the display system tends to be concentrated at the very times when the practitioner can least afford new tasks, new memory demands, or diversions of his or her attention away from patient state to the interface per se.

The physicians tailored both the system and their own cognitive strategies to cope with this bottleneck. In particular, they were observed to constrain the display of data into a fixed spatially dedicated default organization rather than exploit device flexibility. They forced scheduling of device interaction to low criticality self-paced periods to try to minimize any need for interaction at high workload periods. They developed stereotypical routines to avoid getting lost in the network of display possibilities and complex menu structures.

CLUMSY AUTOMATION IN THE OPERATING ROOM: 2 – REDUCING THE ABILITY FOR RECOVERY FROM ERROR OR FAILURE

This investigation started with a series of critical incidents involving physician interaction with an automatic infusion device during cardiac surgery. The infusion controller was a newly introduced computer-based device used to control the flow of blood pressure and heart rate medications to patients during heart surgery. Each incident involved delivery of a drug to the patient when the device was supposed to be off or halted. Detailed debriefing of participants suggested that, under certain circumstances, the device would deliver drug (sometimes at a very high rate) with little or no evidence to the user that the infusion was occurring. A series of investigations were done including observation of device use in context to identify:

• characteristics of the device which make its operation difficult to observe and error prone, and

• characteristics of the context of cardiac anesthesiology which interact with the device characteristics to provide opportunities for unplanned delivery of drug (Cook et al., 1992; Moll van Charante et al., 1993).

In cardiac surgery, the anesthesiologist monitors the patient’s physiological status (e.g., blood pressure, heart rate) and administers potent vasoactive drugs to control these parameters to desired levels based on patient baselines, disease type, and stage of cardiac surgery. The vasoactive drugs are administered as continuous infusion drips mixed with intravenous fluids. The device in question is one type of automatic infusion controller that regulates the rate of flow. The user enters a target in terms of drops per minute, the device counts drops that form in a drip chamber, compares this to the target, and adjusts flow. If the device is unable to regulate flow or detects one of several different device conditions, it is programmed to cease operation and emit an audible alarm and warning message. The interface controls consist of three multi-function buttons and a small LCD panel which displays target rate and messages. In clinical use in cardiac surgery up to six devices may be set up with different drugs that may be needed during the case.
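As a rough sketch of the regulation logic just described (an assumed, simplified model in Python, not the actual device’s firmware), the controller compares the counted drop rate against the target, adjusts flow, and halts with an alarm when it cannot regulate:

    # Simplified, assumed model of the drop-counting regulation loop.
    def regulate(target_drops_per_min, counted_drops_per_min, valve_opening,
                 max_error=10):
        """Return (new_valve_opening, alarm_message or None)."""
        error = target_drops_per_min - counted_drops_per_min
        if abs(error) > max_error:
            # Unable to regulate flow: cease operation and raise an audible alarm.
            return 0.0, "ALARM: flow not regulated - infusion halted"
        # Proportional adjustment of the flow toward the target rate.
        new_opening = max(0.0, min(1.0, valve_opening + 0.01 * error))
        return new_opening, None

    opening, alarm = regulate(target_drops_per_min=30,
                              counted_drops_per_min=24,
                              valve_opening=0.5)
    print(opening, alarm)   # valve opens slightly (0.56); no alarm

What matters for the incidents that follow is not the control law itself but how little of this internal state the three buttons and the small LCD panel actually reveal to the user.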

The external indicators of the device’s state provide poor feedback and make it difficult for physicians to assess or track device behavior and activities. For example, the physician users were unaware of various controller behavioral characteristics such as overshoot at slow target rates, “seek” behavior, and erratic control during patient transport. Alarms were remarkably common during device operation. The variety of messages was ambiguous – several different alarm messages can be displayed for the same underlying problem, and which message appears depends on operating modes of the device that are not indicated to the user. Given the lack of visible feedback, when alarms recurred or occurred in a sequence, it was very difficult for the physician to determine whether the device had delivered any drug in the intervening period.
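One way to picture the ambiguity (a hypothetical mapping invented for illustration, not the device’s actual message table): the message shown depends jointly on the underlying problem and on an internal operating mode that the display never reveals, so the same problem can surface as several different messages.

    # Hypothetical illustration of mode-dependent alarm messages.
    # The user sees only the message, never the internal mode.
    MESSAGES = {
        ("no drops detected", "RUN"):     "FLOW ERROR",
        ("no drops detected", "STANDBY"): "CHECK SET",
        ("no drops detected", "STARTUP"): "OCCLUSION?",
    }

    def alarm_text(problem: str, internal_mode: str) -> str:
        return MESSAGES.get((problem, internal_mode), "DEVICE FAULT")

    # Same underlying problem, three different messages:
    for mode in ("RUN", "STANDBY", "STARTUP"):
        print(alarm_text("no drops detected", mode))

Reconstructing what the device actually did across a series of such alarms requires inferring the hidden mode, which is exactly the work that poor feedback makes difficult.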

The most intense periods of device use also were those time periods of highest cognitive load and task criticality for the physicians, that is, the time period of coming off cardio-pulmonary bypass. It is precisely during these periods of high workload that the automated devices are supposed to provide assistance (less user workload through more precise flows, smoother switching between drip rates, and so on). However, this was also the period where the largest number of alarms occurred and where device troubleshooting was most onerous.

Interestingly, users seemed quite aware of the potential for error and difficulties associated with device setup which could result in the device not working as intended when needed. They sought to protect themselves from these troubles in various ways, although the strategies were largely ineffective.

In the incidents, misassemblies or device problems led to inadvertent drug deliveries. The lack of visible feedback led physicians to think that the device was not delivering drug and was not the source of the observed changes in patient physiology. Large amounts of vasoactive drugs were delivered to brittle cardiovascular systems, and the physicians were unable to detect that the infusion devices were the source of the changes. Luckily in all of the cases, the physicians responded appropriately to the physiological changes with other therapies and avoided any adverse patient outcomes. Only later did the physicians realize that the infusion device was the source of the physiological changes. In other words, these were cases of automation surprise.

The investigations revealed that various device characteristics led to an increased potential for misassessments of device state and behavior. These characteristics played a role in the incidents because they impaired the physician’s ability to detect and recover from unintended drug administrations. Because of these effects, the relevant characteristics of the device can be seen as deficiencies from a usability point of view; the device design is “in error.”

The results of this series of studies directly linked, for the same device and context, characteristics of computerized devices to increased potential for erroneous actions and impaired ability to detect and recover from errors or failures. Furthermore, the studies directly linked the increased potential for erroneous setup and the decreased ability to detect errors as important contributors to critical incidents.

THE IMPACT OF CLUMSY AUTOMATION ON COGNITION AND COLLABORATION

There are some important patterns in the results from the above studies and others like them. One is that characteristics of computer-based devices and systems affect the potential for different kinds of erroneous actions and assessments. Characteristics of computer-based devices that influence cognition and behavior in ways that increase the potential for erroneous actions and assessments can be considered flaws in the joint human-computer cognitive system that can create operational problems for people at the sharp end.

A second pattern is that the computer medium shapes the constraints for design. In pursuit of the putative benefits of automation, user customizability, and interface configurability, it is easy for designers to unintentionally create a thicket of modes and options, to create a mask of apparent simplicity overtop of underlying device or interface complexity, to create a large network of displays hidden behind a narrow keyhole.

A result that occurred in all the above studies is that practitioners actively adapted or tailored the information technology provided for them to the immediate tasks at hand in a locally pragmatic way, usually in ways not anticipated by the designers of the information technology. Tools are shaped by their users.

New technology introduced for putative benefits in terms of human performance in fact introduced new demands and complexities into already highly demanding fields of practice. Practitioners developed and used a variety of strategies to cope with these new complexities. Because practitioners are responsible agents in the domain, they work to insulate the larger system from device deficiencies and peculiarities of the technology. This occurs, in part, because practitioners inevitably are held accountable for failure to correctly operate equipment, diagnose faults, or respond to anomalies even if the device setup, operation, and performance are ill-suited to the demands of the work environment.

In all of these studies practitioners tailored their strategies and behavior to avoid problems and to defend against device idiosyncrasies. However, the results also show that these adaptations were only partly successful. The adaptations could be effective or merely locally adaptive; in other words, they were brittle to varying degrees (i.e., useful in narrow contexts, but problematic in others).

An underlying contributor to the above problems in human-automation coordination is the escalation principle. There is a fundamental relationship where the greater the trouble in the underlying process or the higher the tempo of operations, the greater the information processing activities required to cope with the trouble or pace of activities. For example, demands for monitoring, attentional control, information, and communication among team members (including human-machine communication) all tend to go up with the unusualness (situations at or beyond margins of normality or beyond textbook situations), tempo and criticality of situations. If there are workload or other burdens associated with using a computer interface or with interacting with an autonomous or intelligent machine agent, these burdens tend to be concentrated at the very times when the practitioner can least afford new tasks, new memory demands, or diversions of his or her attention away from the job at hand to the interface per se. This is the essential trap of clumsy automation.

Finally, it would be easy to label the problems noted above as simply “human-computer interaction deficiencies.” In some sense they are exactly that. But the label “human-computer interaction” (HCI) carries with it many different assumptions about the nature of the relationship between people and technology. The examples above illustrate deficiencies that go beyond the concepts typically associated with the label computer interface in several ways.

First, all of these devices more or less meet guidelines and common practices for human-computer interaction defined as simply making the needed data nominally available, legible, and accessible. The characteristics of the above systems are problems because of the way that they shape practitioner cognition and collaboration in their field of activity. These are not deficiencies in an absolute sense; whether or not they are flaws depends on the context of use.

Second, the problems noted above cannot be seen without understanding device use in context. Context-free evaluations are unlikely to uncover the important problems, determine why they are important, and identify criteria that more successful systems should meet.

Third, the label HCI easily conjures up the assumption of a single individual alone, rapt in thought, but seeing and acting through the medium of a computerized device. The cases above and the examples throughout this volume reveal that failures and successes involve a system of people, machine cognitive agents, and machine artifacts embedded in context. Thus, it is important to see that the deficiencies, in some sense, are not in the computer-based device itself. Yes, one can point to specific aspects of devices that contribute to problems (e.g., multiple modes, specific opaque displays, or virtual workspaces that complicate knowing where to look next), but the proper unit of analysis is not the device or the human. Rather, the proper unit of analysis is the distributed system that accomplishes cognitive work – characteristics of artifacts are deficient because of how they shape cognition and collaboration among a distributed set of agents. Clumsiness is not really in the technology. Clumsiness arises in how the technology is used relative to the context of demands, resources, agents, and other tools.

Today, most new developments are justified in part based on their presumed impact on human performance. Designers believe:

• automating tasks will free up cognitive resources,

• automated agents will offload tasks,

• automated monitors will diagnose and alert users to trouble,

• flexible data handling and presentation will allow users to tailor data to the situation,

• integrating diverse data onto one screen will aid situation assessment.

The result is supposed to be reduced workload, enhanced productivity, and fewer errors.

In contrast, studies of the impact of new technology on users, like those above, tell a very different story. We see an epidemic of clumsy use of technology, creating new burdens for already beleaguered practitioners, often at the busiest or most critical times. Data shows that:

• instead of freeing up resources, clumsy use of technology creates new kinds of cognitive work,

• instead of offloading tasks, autonomous but silent machine agents create the need for team play, coordination, and intent communication with people, demands which are difficult for automated systems to meet,

• instead of focusing user attention, clumsy use of technology diverts attention away from the job to the interface,

• instead of aiding users, generic flexibilities create new demands and error types,

• instead of reducing human error, clumsy technology contains “classic” flaws from a human-computer cooperation point of view; these create the potential for predictable kinds of erroneous actions and assessments by users,

• instead of reducing knowledge requirements, clumsy technology demands new knowledge and more difficult judgments.
