15
BALANCING ACCOUNTABILITY AND LEARNING

THE PRESSURE TO HOLD PEOPLE ACCOUNTABLE

When organizations make progress on safety by going behind the label “human error”, it is mostly because they are able to do two things:

• Take a systems perspective: accidents are not caused by failures of individuals, but emerge from the conflux or alignment of multiple contributory system factors, each necessary and only jointly sufficient. The source of accidents is the system, not its component parts.

• Move beyond blame: blame focuses on the supposed defects of individual operators and denies the import of systemic contributions. In addition, blame has all kinds of negative side-effects. It typically leads to defensive posturing, obfuscation of information, protectionism, polarization, and muted reporting systems.

There can be pressures, however, to hold practitioners (or managers) involved in an incident or accident “accountable”, even if that may hamper other people’s willingness to voluntarily come forward with safety information in the future. Demands for accountability are not unusual or illegitimate, to be sure. Accountability is fundamental to any social relation: there is always an implicit or explicit expectation that we may be called upon to justify our beliefs and actions to others. The social-functionalist argument for accountability is that this expectation is mutual: as social beings we are locked into reciprocating relationships.

Accountability, however, is not a unitary concept – even if this is what many stakeholders may think when aiming to improve people’s performance under the banner of “holding them accountable”. There are as many types of accountability as there are distinct relationships among people, and between people and organizations, and only highly specialized subtypes of accountability actually compel people to expend more cognitive effort. Expending greater effort, moreover, does not necessarily mean better task performance: operators may become concerned more with limiting exposure and liability than with performing well (Lerner and Tetlock, 1999), something that can be observed in the decline of incident reporting under threats of prosecution. What is more, if the demand to account is perceived as illegitimate (for example, as intrusive, insulting or ignorant of real work), then any beneficial effects of accountability vanish or backfire. Such effects include a decline in motivation, excessive stress and attitude polarization, and they can easily be seen in cases where practitioners or managers are “held accountable” by courts and other parties ignorant of the real trade-offs and dilemmas that make up actual operational work.

JUST CULTURE

The desire to balance learning from failure with appropriate accountability has motivated a number of safety-critical industries and organizations to develop guidance on a so-called “just culture.” Of particular concern is the sustainability of learning from failure through incident reporting: if operators and others perceive that their reports will not be treated fairly, or will lead to negative consequences, the willingness to report will decline. The challenge for organizations, though, is that they want to know everything that happened, but cannot accept everything. The common assumption is that some behavior is inherently culpable, and should be treated as such. The public must be protected against intentional misbehavior or criminal acts, and the application of justice is a prime vehicle for this (e.g., Reason, 1997). As Marx (2001, p. 3) put it, “It is the balancing of the need to learn from our mistakes and the need to take disciplinary action that (needs to be addressed). Ultimately, it will help you answer the question: ‘Where do you draw the disciplinary line?’”

Indeed, all proposals for building a just culture focus on drawing a clear line between acceptable and unacceptable behavior. For example, a just culture is one in which “front-line operators or others are not punished for actions, omissions or decisions taken by them that are commensurate with their experience and training, but where gross negligence, willful violations and destructive acts are not tolerated” (Eurocontrol, 2006). Such proposals emphasize the establishment of, and consensus around, some kind of separation between legitimate and illegitimate behavior: “in a just culture, staff can differentiate between acceptable and unacceptable acts” (Ferguson and Fakelmann, 2005, p. 34).

This, however, is largely a red herring. The issue of where the line is drawn is nowhere near as pressing as the question of who gets to draw it. An act is not an “error” in itself (this label, after all, is the result of social attribution), and an “error” is not culpable by itself either (that, too, is the result of social attribution). Culpability does not inhere in the act. Whether something is judged culpable is the outcome of processes of interpretation and attribution that follow the act. Thus, to gauge whether behavior should fall on one side of the line or the other, people sometimes rely on culpability decision trees (e.g., Reason, 1997). Yet the questions such a tree asks confirm the negotiability of the line rather than resolving its location:

• Were the actions and consequences as intended? This evokes the judicial idea of a mens rea (“guilty mind”), and seems a simple enough question. Few people in safety-critical industries intend to inflict harm, though that does not prevent them from being prosecuted for their “errors” (under charges of manslaughter, for example, or general risk statutes that hail from road traffic laws on “endangering other people”; see for example Wilkinson, 1994). Also, what exactly is intent and how do you prove it? And who gets to prove this, using what kind of expertise?

• Did the person knowingly violate safe operating procedures? People in all kinds of operational worlds knowingly violate safe operating procedures all the time. In fact, the choice can be as simple as either getting the job done or following all the applicable procedures. It is easy to show in hindsight which procedures would have been applicable and that they were available, workable and correct (says who, though?).

• Were there deficiencies in training or selection? “Deficiencies” seems unproblematic, but what is a deficiency from one angle can be perfectly normal or even above standard from another.

Questions such as the ones above may seem a good start, but they themselves cannot arbitrate between culpable or blameless behavior. Rather, they invoke new judgments and negotiations. This is true also for the very definition of negligence (a legal term, not a human performance concept):

Negligence is conduct that falls below the standard required as normal in the community. It applies to a person who fails to use the reasonable level of skill expected of a person engaged in that particular activity, whether by omitting to do something that a prudent and reasonable person would do in the circumstances or by doing something that no prudent or reasonable person would have done in the circumstances. To raise a question of negligence, there needs to be a duty of care on the person, and harm must be caused by the negligent action. In other words, where there is a duty to exercise care, reasonable care must be taken to avoid acts or omissions which can reasonably be foreseen to be likely to cause harm to persons or property. If, as a result of a failure to act in this reasonably skillful way, harm/injury/damage is caused to a person or property, the person whose action caused the harm is negligent. (GAIN, 2004, p. 6)

There is no definition that captures the essential properties of “negligence.” Instead, definitions such as the one above open a new array of questions and judgments. What is “normal standard”? How far is “below”? What is “reasonably skillful”? What is “reasonable care”? What is “prudent”? Was harm indeed “caused by the negligent action?” Of course, making such judgments is not impossible. But they remain judgments – made by somebody or some group in some time and place in the aftermath of an act – not objective features that stably inhabit the act itself.

ERROR AND CULPABILITY AS SOCIAL LABELS

Just as the properties of “human error” are not objective and independently existing, so does culpability arise out of our ways of seeing and putting things. What ends up being labeled as culpable does not inhere in the act or the person. It is constructed (or “constituted”, as Christie put it) through the act of interrogation:

The world comes to us as we constitute it. Crime is thus a product of cultural, social and mental processes. For all acts, including those seen as unwanted, there are dozens of possible alternatives to their understanding: bad, mad, evil, misplaced honor, youth bravado, political heroism – or crime. The same acts can thus be met within several parallel systems as judicial, psychiatric, pedagogical, theological. (Christie, 2004, p. 10)

We would think that culpability, of all things, must make up some essence behind a number of possible descriptions of an act, especially if that act has a bad outcome. We seem to have great confidence that the various descriptions can be sorted out by the rational process of a peer review or a hearing or a trial, and that it will expose Christie’s “psychiatric, pedagogical, theological” explanations (I had failure anxiety! I wasn’t trained enough! It was the Lord’s will!) as patently false. The application of reason will strip away the noise, the decoys, the excuses and arrive at the essential story: whether culpability lay behind the incident or not. And if culpable behavior turns out not to make up the essence, then there will be no negative consequences.

But the same unwanted act can be construed to be a lot of things at the same time, depending on what questions we ask to begin with. Ask theological questions and we may see in an error the manifestation of evil, or the weakness of the flesh. Ask pedagogical questions and we may see in it the expression of underdeveloped skills. Ask judicial questions and we may begin to see a crime. Unwanted acts do not contain something culpable as their essence. We make it so, through the perspective we take, the questions we ask. As Christie argued, culpability is not an essence that we can discover behind the inconsistency and shifting nature of the world as it meets us. Culpability itself is that flux, that dynamism, that inconstancy: a negotiated arrangement, a tenuous, temporary stability achieved among shifting cultural, social, mental and political forces. Concluding that an unwanted act is culpable is an accomplished project, a purely human achievement:

Deviance is created by society … social groups create deviance by making the rules whose infraction constitutes deviance and by applying those rules to particular persons and labeling them as outsiders. From this point of view, deviance is not a quality of the act the person commits, but rather a consequence of the application by others of rules and sanctions to an ‘offender’. The deviant is the one to whom the label has successfully been applied; deviant behavior is behavior that people so label. (Becker, 1963, p. 9)

What counts as deviant or culpable is the result of processes of societal negotiation, of social construction. If an organization decides that a certain act constituted “negligence” or otherwise falls on the wrong side of the line, then this is the result of using a particular language and enacting a particular repertoire of post-conditions that turn the act into culpable behavior and the involved practitioner into an offender. Finding an act culpable, then, is a negotiated settlement onto one particular version of history. This version is not just produced for its own sake. Rather, it may serve a range of social functions, from emphasizing moral boundaries and enhancing solidarity, to sustaining subjugation or asymmetric power distribution within hierarchies, to protecting elite interests after an incident has exposed possibly expensive vulnerabilities in the system as a whole (Perrow, 1984), to mitigating public or internal apprehension about the system’s ability to protect its safety-critical technologies against failure (Vaughan, 1996).

Who has the power to tell a story of performance in such a way – to use a particular rhetoric to describe it, ensuring that certain subsequent actions are legitimate or even possible (e.g., pursuing a single culprit), and others not – so as to, in effect, own the right to draw the line? This is a much more critical question than where the line goes, because that is anybody’s guess. What is interesting is not whether some acts are so essentially negligent as to warrant more serious consequences. Instead, to which processes or authorities does a society (or an organization) give the power to decide whether an act should be seen as negligent? Who enjoys the legitimacy to draw the line? The question for a just culture is not where to draw the line, but who gets to draw it.

ALTERNATIVE READINGS OF “ERROR”

People tend to believe that an “objective” account (one produced by the rational processes of a court, or an independent investigation of the incident) is superior in its accuracy because it is well-researched and not as tainted by interests or a particular, partisan perspective. These accounts, however, each represent only one tradition among a number of possible readings of an incident or unsafe act. They also offer just one language for describing and explaining an event, relative to a multitude of other possibilities. If we subscribe to one reading as true, it will blind us to alternative readings or framings that are frequently more constructive.

CASE 15.1 SURGICAL CRIMES?

Take as an example a British cardiothoracic surgeon who moved to New Zealand (Skegg, 1998). There, three patients died during or immediately after his operations, and he was charged with manslaughter. Not long before, a professional college had pointed to serious deficiencies in the surgeon’s work and found that seven of his cases had been managed incompetently. The report found its way to the police, which subsequently investigated the cases. This in turn led to the criminal prosecution of the surgeon. Calling the surgical failures a crime is one possible interpretation of what went wrong and what should be done about it. Other readings are possible too, and not necessarily less valid:

• For example, we could see the three patients dying as an issue of cross-national transition: are the procedures for doctors moving to Australia or New Zealand, and for integrating them into local practice, adequate? And how are any cultural implications of practicing there systematically managed or monitored, if at all?

• We could see these deaths as a problem of access control to the profession: do different countries have different standards for whom they would want as a surgeon, and who controls access, and how?

• It could also be seen as a problem of training or proficiency-checking: do surgeons submit to regular and systematic follow-up of critical skills, as professional pilots do in a proficiency check every six months?

• We could also see it as an organizational problem: there was a lack of quality-control procedures at the hospital, and the surgeon testified that he had no regular junior staff to help with operations, but was made to work with medical students instead.

• Finally, we could interpret the problem as socio-political: what forces are behind the assignment of resources and oversight in care facilities outside the capital?

It may well be possible to write a compelling argument for each explanation of failure in the case above – each with a different repertoire of interpretations and countermeasures following from it. A crime gets punished away. Access and proficiency issues get controlled away. Training problems get educated away. Organizational issues get managed away. Political problems get elected or lobbied away. This also has different implications for what we mean by accountability. If we see an act as a crime, then accountability means blaming and punishing somebody for it. Accountability in that case is backward-looking, retributive. If, instead, we see the act as an indication of an organizational, operational, technical, educational or political issue, then accountability can become forward-looking. The question becomes: what should we do about the problem and who should bear responsibility for implementing those changes?

The point is not that one interpretation is right and all the others wrong. To even begin to grasp a phenomenon (such as an adverse surgical event in a hospital) we first have to accept the relevance and legitimacy of multiple, partially overlapping and often contradictory accounts. Because outside those, we have nothing. None of these accounts is inherently right and none is inherently wrong, but all of them are inherently limited. Telling the story from one angle necessarily excludes aspects from other angles. And all interpretations have different ramifications for what people and organizations think they should do to prevent recurrence, some more productive than others.

DISCRETIONARY SPACE FOR ACCOUNTABILITY

Telling multiple different stories of failure, however, can generate suspicions that operators simply want to blame the system; that they, as professionals (air traffic controllers, physicians, pilots), do not wish to be held accountable in the same way that others would be. Of course one message of this book is that we should look at the system in which people work, and improve it to the best of our ability. That, after all, is going behind the label “human error.” But rather than presenting a false choice between blaming individuals or systems, we should explore the relationships and roles of individuals in systems. All safety-critical work is ultimately channeled through relationships between human beings (such as in medicine), or through direct contact of some people with the risky technology.

At this sharp end, there is almost always a discretionary space into which no system improvement can completely reach. This space can be filled only by an individual care-giving or technology-operating human. This is a final space in which a system really does leave people freedom of choice (to launch or not, to go to open surgery or not, to fire or not, to continue an approach or not). It is a space filled with ambiguity, uncertainty and moral choices. Systems cannot substitute for the responsibility borne by individuals within that space. Individuals who work in those systems would not even want their responsibility to be taken away by the system entirely. The freedom (and the concomitant responsibility) there is probably what makes them and their work human, meaningful, a source of pride.

But systems can do two things. One is to be very clear about where that discretionary space begins and ends. Not giving practitioners sufficient authority to decide on courses of action (such as in many managed care systems), but demanding that they be held accountable for the consequences anyway, creates impossible and unfair double binds. Such double binds effectively shrink the discretionary space before action, but open it wide after any bad consequences of action become apparent (then it was suddenly the physician’s or pilot’s responsibility after all). Such vagueness or slipperiness of where the borders of the discretionary space lie is typical, but it is unfair and unreasonable.

The other thing for the system to decide is how to motivate people to carry out their responsibilities conscientiously inside that discretionary space. Is the source for that motivation going to be fear or empowerment? Anxiety or involvement? One common misconception is that “there has to be some fear that not doing one’s job correctly could lead to prosecution.” Indeed, prosecution presumes that the conscientious discharge of personal responsibility comes from fear of the consequences of not doing so. But neither civil litigation nor criminal prosecution works as a deterrent against human error. Instead, the anxiety created by such accountability leads, for example, to defensive medicine, not high-quality care, and even to a greater likelihood of subsequent incidents (e.g., Dauer, 2004). The anxiety and stress generated by such accountability add attentional burdens and distract from the conscientious discharge of the main safety-critical task (Lerner and Tetlock, 1999).

Rather than making people afraid, organizations could invest more in making people participants in change and improvement. Empowering people to affect their work conditions, and involving them in drawing the outlines and content of that discretionary space, most actively promotes their willingness to shoulder their responsibilities inside it.

CASE 15.2 PARALYTIC MISADMINISTRATION IN ANESTHESIA

Haavi Morreim (2004) recounts a case in which an anesthesiologist, during surgery, reached into a drawer that contained two vials, sitting side by side. Both vials had yellow labels and yellow caps. One, however, held a paralytic agent, and the other a reversal agent to be used later, when paralysis was no longer needed. At the beginning of the procedure, the anesthesiologist administered the paralyzing agent, as intended. But toward the end, he grabbed the wrong vial, administering additional paralytic instead of its reversal agent. There was no bad outcome in this case. But when he discussed the event with his colleagues, it turned out that this had happened to them too, and that they were all quite aware of the enormous potential for confusion. All knew about the hazard, but none had spoken out about it. Anxiety about the consequences could be one explanation. There could have been a climate in which people were reluctant to contribute to improvements in their work, because of fear of the consequences of flagging their own potential errors. Perhaps people felt that, if they were to report their own syringe-swap near miss, they could be sanctioned for being involved in an incident that could have harmed a patient. Do we think we can prevent anesthesiologists from grabbing a wrong vial by making them afraid of the consequences if they do? It is likely that anesthesiologists are sufficiently anxious about the consequences (for the patient) already. So should we not rather prevent them from grabbing a wrong vial by inviting them to come forward with information about that vulnerability, and giving the organization an opportunity to help do something structural about the problem?

That said, the problem of syringe swaps in anesthesia is well known in the anesthesia safety literature, but it also has proven to be impervious to quick solutions (see Cooper et al., 1978; Sandnes et al., 2008).

BLAME-FREE IS NOT ACCOUNTABILITY-FREE

Equating blame-free systems with an absence of personal accountability is inaccurate. Blame-free means blame-free, not accountability-free. The question is not whether practitioners want to skirt personal accountability. Few practitioners do. The question is whether we can meaningfully wring such accountability out of practitioners by blaming them, suing them or putting them on trial. We should instead convince ourselves that we can create such accountability not by blaming people, but by getting people actively involved in the creation of a better system to work in. Most practitioners will relish such responsibility, just as most practitioners often despair at the lack of opportunity to really influence their workplace and its preconditions for the better.

Holding people accountable and blaming people are two quite different things. Blaming people may in fact make them less accountable: they will offer fewer accounts, and they may feel less compelled to have their voice heard or to participate in improvement efforts. Blame-free or no-fault systems are not accountability-free systems. On the contrary: such systems open up the possibility for people to give their account, so that everybody can respond and take responsibility for doing something about the problem. As noted above, seeing an act as a crime makes accountability backward-looking and retributive, whereas seeing it as an indication of an organizational, operational, technical, educational or political issue makes accountability forward-looking (Sharpe, 2003): what should we do about the problem, and who should bear responsibility for implementing those changes? This, however, can get difficult very quickly. The lessons of an accident can get converted from an opportunity for a fundamental revision of assumptions about how the system works into a mere local hiccup in an otherwise smooth operation (which can be taken care of by removing or punishing a few “bad apples”).

BUILDING A JUST CULTURE

“What is just?” ask colleagues in the aftermath of an incident that happened to one of them. “How do we protect ourselves against disproportionate responses?” they add. “What is wise?” ask the supervisors. “What do people – other employees, customers, the public – expect me to do?” ask managers. And then other parties (e.g., prosecutors) ask, “Should we get involved?” The confusion about how to respond justly and still maintain a sense of organizational cohesion, loyalty and safety can be considerable.

At the same time, many organizations (whether they know it or not) seem to settle on pragmatic solutions that at least allow them to regain some balance in the wake of a difficult incident. When you look at these “solutions” a little more closely, you can see that they really boil down to answers to three central questions that need to be dealt with in the process of building a just culture (Dekker, 2007):

• Who in the organization or society gets to draw the line between practitioners’ acceptable and unacceptable behavior?

• What and where should the role of domain expertise be in judging whether behavior is acceptable or unacceptable?

• How protected against judicial interference are safety data (either the safety data from incidents inside of the organization or the safety data that come from formal accident investigations)?

The differences in the directions that countries or organizations or professions are taking towards just cultures come down to variations in the answers to these three questions. Some answers work very well in some contexts; others less so. Also, the list of solutions is far from exhaustive, but it could inspire others to think more critically about where they or their organization may have settled (and whether that is good or bad).

In general, though, we can say this for the three questions. On the first question, the more a society, industry, profession or organization has made clear, agreed arrangements about who gets to draw the line, the more predictable the managerial or judicial consequences of an occurrence are likely to be. That is, practitioners may suffer less anxiety and uncertainty about what may happen in the wake of an occurrence, as arrangements have been agreed on and are in place.

On the second question, the greater the role of domain expertise in drawing the line, the less likely practitioners and organizations may be to get exposed to unfair or inappropriate judicial proceedings. That said, there is actually no research that suggests that domain experts automatically prevent the biases of hindsight from slipping into their judgments of past performance. Hindsight is too pervasive a bias. It takes active reconstructive work, for everyone, to even begin to circumvent its effects. Also, domain experts may have other biases that work against their ability to fairly judge the quality of another expert’s performance. There is, for example, the issue of psychological defense: if experts were to affirm that the potential for failure is baked into their activity and not unique to the practitioner who happened to inherit that potential, then this makes them vulnerable too.

On the third question, the better protected safety data are from judicial interference, the more likely practitioners are to feel free to report. The protection of these safety data is connected, of course, to how the country or profession deals with the first and second question. For example, countries or professions that do protect safety data typically have escape clauses, so that the judiciary can gain access “when crimes are committed,” or in “justified cases when duly warranted,” or “for gross negligence and acts sanctioned by the criminal code.” It is very important to make clear who gets to decide what counts as a “crime”, or “duly warranted” or “gross negligence”, because uncertainty (or the likelihood of non-experts making that judgment) can once again hamper practitioners’ confidence in the system and their willingness to report or disclose.
