7
MINDSET

ATTENTIONAL DYNAMICS

Mindset is about attention and its control (Woods, 1995b). It is especially critical when examining human performance in dynamic, evolving situations where practitioners must shift attention in order to manage work over time. In all real-world settings there are multiple signals and tasks competing for practitioner attention. On flight decks, in operating rooms, or in shipboard weapon control centers, attention must flow from object to object and topic to topic. Sometimes intrusions into practitioner attention are distractions, but other times they are critical cues that important new data are available (Klein, Pliske, et al., 2005). A host of issues arise under this heading. Situation awareness, the redirection of attention among multiple threads of ongoing activity, and the consequences of attention being too narrow (fixation) or too broad (vagabonding) are all critical to practitioner performance in these sorts of domains, and all involve the flow of attention and, more broadly, mindset. Despite its importance, understanding the role of mindset in accidents is difficult because, in retrospect and with hindsight, investigators know exactly what was of highest priority at each moment.

CASE 7.1 HYPOTENSION

During a coronary artery bypass graft procedure, an infusion controller device used to control the flow of sodium nitroprusside (SNP) to the patient delivered a large volume of drug at a time when no drug should have been flowing. Five of these microprocessor-based devices, each controlling the flow of a different drug, were set up in the usual fashion at the beginning of the day, prior to the case. The initial part of the case was unremarkable. Elevated systolic blood pressure (>160 torr) at the time of sternotomy prompted the practitioner to begin an infusion of SNP. After the infusion was started at 10 drops per minute, the device began to sound an alarm. The tubing connecting the device to the patient was checked and a stopcock (valve) was found closed. The operator opened the stopcock and restarted the device. Shortly after restart, the device alarmed again. The blood pressure was falling by this time, and the operator turned the device off. Over a short period, hypertension gave way to hypotension (systolic pressure <60 torr). The hypotension was unresponsive to fluid challenge but did respond to repeated injections of neosynephrine and epinephrine. The patient was placed on bypass rapidly. Later, the container of nitroprusside was found to be empty; a full bag of 50 mg in 250 ml had been set up before the case.

The physicians involved in the incident were comparatively experienced device users. Reconstructing the events after the incident led to the conclusion that the device was assembled in a way that would allow free flow of drug. Initially, however, the stopcock blocked drug delivery. The device was started, but the machine did not detect any flow of drug (because the stopcock was closed) and this triggered visual and auditory alarms. When the stopcock was opened, free flow of fluid containing drug began. The controller was restarted, but the machine again detected no drip rate, this time because the flow was a continuous stream and no individual drops were being formed. The controller alarmed again with the same message that had appeared to indicate the earlier no-flow condition. Between the opening of the stopcock and the generation of the error message, sufficient drug was delivered to substantially reduce the blood pressure. The operator saw the reduced blood pressure, concluded that the SNP drip was not required, and pushed the control button marked “off.” This powered down the device, but the flow of drug continued. The blood pressure fell even further, prompting a diagnostic search for sources of low blood pressure. The SNP controller was seen to be off. Treatment of the low blood pressure itself commenced and was successful.
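The trap lies in how the controller senses flow. The sketch below (hypothetical class and method names, our illustration of the failure mode, not the device's actual firmware) renders the logic just reconstructed: flow is inferred only from discrete drops crossing a sensor, so a closed stopcock and full free flow both read as "no flow."

```python
# Hypothetical sketch of the drop-sensor logic reconstructed above.
# An illustration of the failure mode, not the device's firmware.

class InfusionController:
    """Infers flow solely from discrete drops counted in the drip chamber."""

    def __init__(self):
        self.running = False

    def start(self):
        self.running = True

    def power_off(self):
        # "Off" powers down the controller electronics only; in a
        # gravity-fed line, fluid can still free-flow through the tubing.
        self.running = False

    def check(self, drops_per_minute: int) -> str:
        if self.running and drops_per_minute == 0:
            return "ALARM: no flow detected"
        return "ok"


device = InfusionController()
device.start()
# Stopcock closed: no drops form, so no flow is detected.
print(device.check(drops_per_minute=0))   # ALARM: no flow detected
# Stopcock opened, free flow: a continuous stream forms no countable
# drops either, so the sensor reports the very same condition.
print(device.check(drops_per_minute=0))   # ALARM: no flow detected
device.power_off()                        # display blanks; flow continues
```

On this rendering, zero countable drops is consistent with both no delivery and maximal delivery, and the "off" state speaks only to the electronics, not to whether fluid is moving.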

We need to point out that a focus on “error” in this incident would distract our attention from the important issues the incident raises. The human performance during the incident was both flawed and exemplary. The failures in working with the infusion device contrast markedly with the successes in managing the complications. Even though they were unable to diagnose its source, the practitioners were quick to correct the physiologic, systemic threat, shifting their focus of attention. Their ability to shift from diagnosis to disturbance management was crucial to maintaining the system (Woods, 1988; Woods and Hollnagel, 2006, Chapter 8).

Mindset, where attention is focused and how it shifts and flows over time, is critical in this incident. There were many things happening in the world and it was impossible to attend to them all. But clearly the important thing that was happening went unregarded. That is to say, attention flowed not to it but to other things.

Why did the device not receive attention? There are numerous contributors to the inability to observe and correct the unintended flow of drug via the infusion device. Some of the more obvious ones, related to the device itself, are:

1. the drip chamber was obscured by the machine’s sensor, making visual inspection difficult;

2. an aluminum shield around the fluid bag hid its decreasing volume;

3. the device produced misleading alarm messages;

4. the presence of multiple devices made it difficult to trace the tubing pathways.

The practitioners reported that they turned the device off as soon as the blood pressure fell, at about the same moment that the device alarmed a second time. In their mindset, the device was off and unimportant. It was unregarded, unattended to. When we say it was unimportant we do not mean that it was examined and found to be uninvolved in the evolving situation. This would have required attention to flow to the device (at least briefly) so that inferences about its state could be made. The post-incident practitioner debriefing leads us to believe that they dismissed the device from their mindset once it was “off”; the device played no role in the flow of their attention from that point. The practitioners did not make inferences about how the device was working or not working or how it might be playing a role: they did not attend to the device at all. Once it was turned “off” it disappeared from practitioner attention, not becoming the focus of attention again until the very end of the sequence. The device was absent from the mindset of the practitioners. This is not to say that the practitioners were idle or inattentive; indeed, they engaged in diagnostic search for the causes of the low blood pressure and in the management of that complication. It was not that they were inattentive, but rather that their mindset did not include the relevant object and its (mis)function.

It is easy to speculate on alternative sequences that might have occurred if circumstances had been slightly different. We are compelled to address the device design, its placement in the room, the other distractors present in such settings, and a host of other “obvious” problems. If the device had been more robustly designed, it might not have had this particular failure mode. If the users had experienced this particular form of device failure before, they might have retained the device in their mindset as a (potentially) active influence in the situation. If the situation had been less demanding, less fast paced, the meaning of the device “off” state might have been questioned. Here the “off” condition was indicated by a blank liquid crystal display (LCD) screen, and the on-off state was controlled by a push button that toggled the state from on to off and back; a different indication might have conveyed the remaining uncertainty (e.g., that the device was powered down but that fluid flow was still possible). But rather than focus on these issues, our attention should now turn to the issue of mindset: its construction, maintenance, strengths, and weaknesses. How is attention directed by experience? How is mindset established? How do practitioners adjust their processes of attending to different stimuli based on their mindset?

The control of attention is an important issue for those trying to understand human performance, especially in event-rich domains such as flight decks, operating rooms, or control centers. Attention is a limited resource. One cannot attend to more than one thing at a time, and so shifts of attention are necessary to be able to “take in” the ways in which the world is changing. When something in the world is found to be anomalous (what is sensed in the world is not consistent with what is expected by the observer), attention focuses on that thing and a process of investigation begins that involves further shifts of attention. This process is ongoing and has been described by Neisser as the perceptual or cognitive cycle (Neisser, 1976). It is a crucial concept for those trying to understand human performance because it is the basis for all diagnosis and action. Nothing can be discovered in the world without attention; no intended change in the world can be effected without shifting attention to the thing being acted upon. At least two kinds of human performance problems are based on attentional dynamics. The first is a loss of situation awareness and the second is psychological fixation.

“LOSS OF SITUATION AWARENESS”

Situation awareness is a label that is often used to refer to many of the cognitive processes involved in what we have called here attentional dynamics (Endsley, 1995; Sarter and Woods, 1991; Adams, Tenney, and Pew, 1995; Woods and Sarter, 2010). There have been many debates about what situation awareness is and many attempts to measure it as a unitary phenomenon. For example, does situation awareness refer to a product or a process? It is not our intention here to engage in or outline a position in these debates. Here we use the label situation awareness, since it is a commonly used expression, to point to the cognitive processes involved in the control of attention. Just a few of the cognitive processes that may be involved when one invokes the label of situation awareness are: control of attention (Gopher, 1991), mental simulation (Klein and Crandall, 1995), forming expectancies (Johnson, Grazioli, Jamal, and Zualkernan, 1992; Christoffersen, Woods and Blike, 2007), directed attention (Woods, 1995b), and contingency planning (Orasanu, 1990). Because the concept involves tracking processes in time, it can also be described as mental bookkeeping: keeping track of multiple threads of different but interacting sub-problems, as well as of the influences of the activities undertaken to control them (Cook, Woods and McDonald, 1991; Woods and Hollnagel, 2006).

Maintaining situation awareness necessarily requires shifts of attention between the various threads. It also requires more than attention alone, for the objective of the shifts of attention is to inform and modify a coherent picture or model of the system as a whole. Building and maintaining that picture require cognitive effort. Breakdowns in these cognitive processes can lead to operational difficulties in handling the demands of dynamic, event-driven incidents. In aviation circles this is known as “falling behind the plane,” and in aircraft carrier flight operations it has been described as “losing the bubble” (Roberts and Rousseau, 1989). In each case what is being lost is the operator’s internal representation of the state of the world at that moment and of the direction in which the forces active in the world are taking the system that the operator is trying to control. Dörner (1983) calls breakdowns in mental bookkeeping “thematic vagabonding,” as the practitioner jumps from thread to thread in an uncoordinated fashion (the response in Incident #1 may have possessed an element of vagabonding).
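To make the bookkeeping metaphor concrete, the toy sketch below models threads as sub-problems that carry assessments and expectancies and point at the threads they interact with. All names and structures are our own illustration, not a validated cognitive model.

```python
# Toy illustration of "mental bookkeeping" over multiple interacting
# threads. Purely hypothetical; not a validated cognitive model.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Thread:
    name: str
    assessment: str                   # current belief about the sub-problem
    expectancy: str                   # what should happen next if the belief holds
    interacts_with: List[str] = field(default_factory=list)

def revise(threads: Dict[str, Thread], name: str, new_assessment: str) -> List[str]:
    """Update one thread and return the interacting threads whose
    expectancies now need re-examination. Coherent bookkeeping performs
    this propagation; 'thematic vagabonding' jumps threads without it."""
    threads[name].assessment = new_assessment
    return threads[name].interacts_with

threads = {
    "bp": Thread("bp", "hypertensive", "pressure falls as SNP takes effect", ["snp"]),
    "snp": Thread("snp", "infusing at set rate", "slow, controlled effect", ["bp"]),
}

# Blood pressure collapses: the revision forces a revisit of the SNP
# thread instead of letting it drop out of the picture.
for stale in revise(threads, "bp", "hypotensive"):
    print(f"re-examine thread: {stale}")
```

The point of the sketch is the return value: revising one thread names the interacting threads that must be revisited, which is exactly the step that dropped out in Case 7.1 once the device was "off."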

Fischer, Orasanu, and Montalvo (1993) examined the juggling of multiple threads of a problem in a simulated aviation scenario. More effective crews were better able to coordinate their activities across multiple issues over time; less effective crews traded one problem for another. More effective crews were sensitive to the interactions between the multiple threads involved in the incident; less effective crews tended to simplify the situations they faced and were less sensitive to the constraints of the particular context. Less effective crews “were controlled by the task demands” and did not look ahead or prepare for what would come next. As a result, they were more likely to run out of time or to encounter other cascading problems. Interestingly, there were written procedures for each of the problems the crews faced. The cognitive work associated with managing multiple threads of activity is different from the activities needed to merely follow the rules.

Obtaining a clear, empirically testable model for situation awareness is difficult. For example, Hollister (1986) presents an overview of a model of divided attention operations – tasks where attention must be divided across a number of different input channels and where the focus of attention changes as new events signal new priorities. This model then defines an approach to breakdowns in attentional dynamics (what has been called a divided attention theory of error) based on human divided attention capabilities balanced against task demands and adjusted by fatigue and other performance-shaping factors. Situation awareness is clearly most in jeopardy during periods of rapid change and where a confluence of forces makes an already complex situation critically so. This condition is extraordinarily difficult to reproduce convincingly in a laboratory setting. Practitioners are, however, particularly sensitive to the importance of situation awareness even though researchers find that a clear definition remains elusive (Sarter and Woods, 1991; Woods and Sarter, 2010).
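One way to picture such a balance model is as a schematic inequality (our own rendering for illustration, not Hollister's actual formulation): a breakdown in attentional dynamics becomes likely when concurrent task demands exceed divided-attention capacity after adjustment for fatigue and other performance-shaping factors,

\[ \sum_{i} d_i(t) \;>\; C \cdot f_{\mathrm{fatigue}} \cdot f_{\mathrm{PSF}} \]

where \(d_i(t)\) is the momentary attentional demand of input channel \(i\), \(C\) is baseline divided-attention capacity, and \(f_{\mathrm{fatigue}}, f_{\mathrm{PSF}} \in (0,1]\) are hypothetical degradation factors for fatigue and other performance-shaping factors.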

Understanding these attentional dynamics relative to task complexities, and how they are affected by computer-based systems, is a very important research issue for progress in aiding situation awareness and for safety in supervisory control systems (cf. McRuer et al., eds., 1992, the National Academy of Sciences report on Aeronautical Technologies for the Twenty-First Century, Chapter 11). To meet this research objective we will need to understand more about coordination across human and machine agents, about how to increase the observability of the state and activities of automated systems, and about the critical characteristics of displays that integrate multiple sources of data in mentally economical ways.

FAILURES TO REVISE SITUATION ASSESSMENTS: FIXATION OR COGNITIVE LOCKUP

The results of several studies (e.g., De Keyser and Woods, 1990; Cook, McDonald, and Smalhout, 1989; Johnson et al., 1981, 1988; Gaba and DeAnda, 1989; Dunbar, 1995; Klein, Pliske, et al., 2005; Rudolph et al., 2009) strongly suggest that one source of error in dynamic domains is a failure to revise situation assessment as new evidence comes in. Evidence discrepant with the agent’s or team’s current assessment is missed, discounted, or rationalized as not really being discrepant. The operational teams involved in several major accidents seem to have exhibited this pattern of behavior; examples include the Three Mile Island accident (Kemeny et al., 1979) and the Chernobyl accident.

Many critical real-world human problem-solving situations take place in dynamic, event-driven environments where the evidence arrives over time and situations can change rapidly. Incidents rarely spring full blown and complete; incidents evolve. In these situations, people must amass and integrate uncertain, incomplete, and changing evidence; there is no single well-formulated diagnosis of the situation. Rather, practitioners make provisional assessments and form expectancies based on partial and uncertain data. These assessments are incrementally updated and revised as more evidence comes in. Furthermore, situation assessment and plan formulation are not distinct sequential stages, but rather they are closely interwoven processes with partial and provisional plan development and feedback leading to revised situation assessments (Woods and Roth, 1988; Klein et al., 1993; Woods and Hollnagel, 2006).

In psychological fixations (also referred to as cognitive lockup and cognitive hysteresis), the initial situation assessment tends to be appropriate, in the sense of being consistent with the partial information available at that early stage of the incident. As the incident evolves, however, people fail to revise their assessments in response to new evidence, evidence that indicates an evolution away from the expected path. The practitioners become fixated on an old assessment and fail to revise their situation assessment and plans in a manner appropriate to the data now present in their world. Thus, a fixation occurs when practitioners fail to revise their situation assessment or course of action and maintain an inappropriate judgment or action in the face of opportunities to revise.

Several criteria are necessary to describe an event as a fixation. One critical feature is that there is some form of persistence over time in the behavior of the fixated person or team. Second, there must be opportunities to revise: cues, available or potentially available to the practitioners, that could have started the revision process if observed and interpreted properly. In part, this feature distinguishes fixations from simple cases of inexperience, lack of knowledge, or other problems that impair error detection and recovery (Cook et al., 1989). As with the label “loss of situation awareness,” the problem is to define a standard for determining which cues, and at what point, should have alerted the practitioners to the discrepancy between the perceived state of the world and the actual state of the world. There is a great danger of falling into the hindsight bias when evaluating after the fact whether a cue “should” have alerted the problem solvers to the discrepancy. The basic defining characteristic of fixations is that the immediate problem-solving context has biased the practitioners in some direction. In naturally occurring problems, the context in which the incident occurs and the way the incident evolves activate certain kinds of knowledge as relevant to the evolving incident. This knowledge, in turn, affects how new incoming information is interpreted. After the fact, or after the correct diagnosis has been pointed out, the solution seems obvious, even to the fixated person or team.

De Keyser and Woods (1990) describe several patterns of behavior that have been observed in cases of practitioner fixation. In the first one, “everything but that,” the operators seem to have many hypotheses in mind but never entertain the correct one. Their external behavior looks incoherent because they are often jumping from one action to another without success. The second one is the opposite: “this and nothing else.” The practitioners are stuck on one strategy, one goal, and they seem unable to shift or to consider other possibilities. One can observe a great deal of persistence in their behavior in this kind of case; for example, practitioners may repeat the same action or recheck the same data channels several times. This pattern is easy to see because of the unusual level of repetition despite an absence of results. The practitioners often detect the absence of results themselves, but without any change in strategy. A third pattern is “everything is O.K.” In this case, the practitioners do not react to changes in their environment. Even if there are multiple cues and evidence that something is going wrong, they do not seem to take these indicators at face value. They seem to discount or rationalize away indications that are discrepant with their model of the situation. On the other hand, one must keep in mind the demands of situation assessment in complex fields of practice. For example, some discrepant data actually may be red herrings or false alarms that should be discounted for effective diagnostic search (e.g., false or nuisance alarms can be frequent in many systems). This is essentially a strategic dilemma in diagnostic reasoning, the difficulty of which depends in part on the demands of the problems and on the observability of the processes in question.

There are certain types of problems that may encourage fixations by mimicking other situations, in effect, leading practitioners down a garden path (Johnson et al., 1988; Johnson, Jamal, and Berryman, 1991; Johnson, Grazioli, Jamal, and Zualkernan, 1992). In garden path problems “early cues strongly suggest [plausible but] incorrect answers, and later, usually weaker cues suggest answers that are correct” (Johnson, Moen, and Thompson, 1988). It is important to point out that the erroneous assessments resulting from being led down the garden path are not due to knowledge factors. Rather, they seem to occur because “a problem-solving process that works most of the time is applied to a class of problems for which it is not well suited” (Johnson et al., 1988). This notion of garden path situations is important because it identifies a task genotype in which people become susceptible to fixations (McGuirl et al., 2009). The problems that occur are best attributed to the interaction of particular environmental (task) features and the heuristics people apply (local rationality given difficult problems and limited resources), rather than to any particular bias or problem in the strategies used. The way that a problem presents itself to practitioners may make it very easy to entertain plausible but in fact erroneous possibilities.

Diagnostic problems fraught with inherent uncertainties are common in complex fields of practice (Woods and Hollnagel, 2006). As a result, it may be necessary for practitioners to entertain and evaluate what turn out later to be erroneous assessments. Problems arise when the revision process breaks down and the practitioner becomes fixated on an erroneous assessment, missing, discounting or re-interpreting discrepant evidence (see Johnson et al., 1988; Roth, Woods, and Pople, 1992; McGuirl et al., 2009 for analyses of performance in garden path incidents). What is important is the process of error detection and recovery which fundamentally involves searching out and evaluating discrepant evidence to keep up with a changing incident.

Several cognitive processes involved in attentional dynamics may give rise to fixation:

• breakdowns in shifting or scheduling attention as the incident unfolds;

• factors of knowledge organization and access that make critical knowledge inert;

• difficulties calling to mind alternative hypotheses that could account for observed anomalies (problems in the processes underlying hypothesis generation);

• problems in strategies for situation assessment (diagnosis) given the probability of multiple factors, for example, how to value parsimony (single-factor assessments) versus multi-factor interpretations.

Fixation may represent the downside of normally efficient and reliable cognitive processes involved in diagnosis and disturbance management in dynamic contexts. Although fixation is fundamentally about problems in attentional dynamics, it may also involve inert knowledge (failing to call to mind potentially relevant knowledge such as alternative hypotheses) or strategic factors (tradeoffs about what kinds of explanations to prefer).

It is clear that in demanding situations where the state of the monitored process is changing rapidly, there is a potential conflict between the need to revise the situation assessment and the need to maintain coherence. Not every change is important; not every signal is meaningful. The practitioner whose attention is constantly shifting from one item to another may not be able to formulate a complete and coherent picture of the state of the system. For example, the practitioner in Case 6.1 was criticized for failing to build a complete picture of the patient’s changing physiological state. Conversely, the practitioner whose attention does not shift may miss cues and data that are critical to updating the situation assessment. This latter condition may lead to fixation. How practitioners manage this conflict is largely unstudied.

Given the kinds of cognitive processes that seem to be involved in fixation, there are a variety of techniques that, in principle, may reduce this form of breakdown. Research consistently shows that revising assessments successfully requires a new way of looking at previous facts (Woods et al., 1987; Patterson et al., 2001). This “fresh” point of view can be provided (a) by bringing in people new to the situation, (b) through interactions across diverse groups with diverse knowledge and tools, and (c) through new visualizations that capture the big picture and re-organize data into different perspectives. The latter is predicated on the fact that poor feedback about the state and behavior of the monitored process, especially related to goal achievement, is often implicated in fixations and failures to revise. Thus, one can provide practitioners with new kinds of representations of what is going on in the monitored process (cf. Woods et al., 1987, for examples from nuclear power developed in response to the Three Mile Island accident).

Note how avoiding fixation and improving the ability to revise assessments reveal the multi-agent nature of cognitive activities in the wild. One changes the architecture of the distributed system to try to ensure a fresh point of view, that is, one that is unbiased by the immediate context. In these distributed system architectures, some members or teams develop their views of the evolving situation separately from others. As a result, one person or group can cross-check the assessments developed by others. These collaborative interchanges can then generate fresh points of view or produce challenges to basic assumptions. For example, this cross-checking process is an important part of how NASA mission control responds to anomalies (Watts-Perotti and Woods, 2009; see also Patterson, Roth et al., 2004; Patterson, Woods, et al., 2007; Klein, Feltovich, et al., 2005).
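As a toy rendering of this architecture (our illustration, schematic only, not NASA mission control's actual process), independent assessors form their views separately and a comparator surfaces disagreements as occasions to challenge assumptions:

```python
# Toy sketch of independent assessment plus cross-checking in a
# distributed architecture. Schematic illustration only.

from itertools import combinations
from typing import Dict, List, Tuple

def cross_check(views: Dict[str, str]) -> List[Tuple[str, str]]:
    """Return pairs of agents whose independently formed assessments
    disagree; each disagreement is an occasion to challenge assumptions."""
    return [(a, b) for a, b in combinations(views, 2) if views[a] != views[b]]

views = {
    "team_in_situation": "infusion device is off; not a factor",
    "fresh_reviewer": "unexplained hypotension; recheck every infusion line",
}
for a, b in cross_check(views):
    print(f"challenge raised between {a} and {b}")
```

Even in this stripped-down form, the value lies in the independence of the views, not in the comparator: the fresh reviewer's assessment is useful precisely because it was formed outside the immediate context that biased the team in the situation.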
