
Expertise and Technology: “I Have a Feeling We Are Not in Kansas Anymore”

Erik HOLLNAGEL

Human Reliability Associates

Jean-Michel HOC

CNRS - University of Valenciennes

Pietro Carlo CACCIABUE

CEC Joint Research Centre, Ispra, Italy

It is a difficult and even daunting task to summarize this book. It contains fifteen chapters — sixteen if the introduction is included — which cover a wide range of topics and viewpoints. Each chapter has presented a perspective on expertise and technology, and although many of the chapters support and complement each other, none of them is redundant. We have tried to organize them into three main groups, but the fact remains that the book in a sense presents fifteen dimensions of expertise and technology. As if this were not enough, there is no guarantee that additional dimensions do not exist. We are thus faced with a set of problems that is more complex than we would really like it to be. As the fifteen contributions have made clear, we are not near the stage where a unified view of expertise and technology can be proposed. This is perhaps characteristic of the field itself: Expertise is not a one-dimensional phenomenon and cannot be given a single — or simple — description.

THE MULTIPLICITY OF EXPERTISE

That expertise is not a one-dimensional phenomenon means that it does not succumb to a binary classification, i.e., a person is not either an expert or a nonexpert. Expertise comes in degrees, and there is no absolute threshold beyond which a person becomes an expert; neither will a person remain an expert forever. The degree of expertise that a person has may vary with the state of mind and with the context. Expertise can be defined as the knowledge that a person can bring to bear on a situation. This knowledge is a function of two things: firstly, the knowledge that the person in principle has, and secondly, the subset of that knowledge that the person can actually make use of in the specific situation, i.e., the knowledge that the person can access, remember, or retrieve. We can also call these the potential competence and the actual competence.

This distinction is not trivial. The working conditions will often be counterproductive to the use of the potential competence, for instance if time is limited. Even though an operator may be capable of performing well, i.e., be a potential expert, time pressure may severely impair performance. This emphasizes the point that there are two kinds of expertise or knowledge. A person can be an expert in knowing what to do, i.e., having substantial knowledge about how a system works, what the fundamentals of a process are, how the system is composed, how it depends on other processes, etc. A person can also be an expert in knowing how to do things, for instance how to manage a difficult situation, how to perform a diagnosis when the symptoms are unrecognizable, etc. The expertise that a person can use in a situation is determined by the extent of “knowing what” and of “knowing how.” In particular, technology may influence how easy it is to comprehend a situation and how easy it is to apply the appropriate knowledge. In many cases technology exerts a negative influence which makes it more difficult for the operator to understand the situation. The essence of expertise is, however, the ability to see what is relevant and then to be able to get that knowledge — from memory or from other sources.

The point is sometimes made that cognition is situated, which means that it depends on the context. This point may be worth making when the background is knowledge engineering and knowledge representation, as these techniques have developed in the field of expert systems. Here the aim has been the isolation of knowledge and expertise, and the efforts have been directed at finding that which can be described separately from being in the situation. Yet if the basis has been the study of the expertise that people have in their daily work, i.e., the study of praxis seen as the unification of theory and practice, then it is trivial to say that cognition and expertise depend on the context. In fact, it would be preposterous to say that expertise can be context independent. Expertise is the bright, elusive butterfly of knowing how to cope with the vagaries of the real world. If expertise could be formalized then, almost by definition, we would no longer need operators. We could implement the expertise in an information processing artifact and be done with it. Expertise, however, resides in people, not in machines.

MENTAL MODELS AND TRAINING

Expertise is frequently discussed in terms of the operator’s mental model of the domain or of the system. By this we mean that people who are familiar with a task or an application have learned something that is essential to make performance more efficient. This knowledge, and in particular the knowledge about the internal functioning of the system, is commonly referred to as the mental model. In particular, it is a common finding that a higher degree of expertise will enable the operator to remain in control in a larger number of situations and to perform with less effort. We say that performance is efficient because the expert has an adequate mental model of the domain. (Note, however, that this almost amounts to a definition of what a mental model is. If the idea of a mental model is not defined independently of the observable performance characteristics, then there is a potential circularity in the arguments.) According to this logic it should therefore be possible to improve performance by making sure that the operator has the proper mental model from the beginning, i.e., that the operator is provided with the mental model rather than having to build it up in a slow and elaborate way.

There are several ways in which we can try to ensure that the operator has the appropriate knowledge to perform efficiently in a task. The three main avenues are system design, training, and selection. Unless operators already have the requisite expertise, by having been carefully selected from a larger population, the mental model is usually provided through training and instruction. Training confronts the operator with the knowledge needed to control a system. But what should the basis for training and instruction be? In other words, how can we know what the correct mental model is? Should training be based on design knowledge or on operational knowledge?

It is indeed tempting to base training on design knowledge. System designers may be able to say clearly that to use the system well — or to use it at all — the operator must know “how to do X, how to avoid Y, and how to diagnose Z.” To employ the functionality of the system, operators must have a certain competence, and that can be provided through training. A possible danger of this view is that training can be used as a compensation for design deficiencies, or as an easy way to adapt the system to changed operating conditions. Another concern is that the designer’s mental model may fail to match reality.

This can happen in two different ways. The first is that designers may not fully take into account the actual working conditions that operators have to cope with. The working conditions comprise not only the exchange of information across the man-machine interface but also the way in which work is embedded in a larger context — consisting of the technical and administrative organization, of colleagues and authorities, of resources and demands on production, and of the local and global environment. The second is that designers may have too simple a view of human cognition — despite being humans themselves. The metaphor of the human as an information processing system has been extremely successful, to the extent that the disadvantages have begun to outweigh the advantages. It cannot be emphasized strongly enough that the human mind is not an information processing machine, and that it is misleading to describe human cognition as simple information processing. The human operator does not just react to events; the operator acts in a context which is shaped by all the information available — from the mind, from the process, and from the environment.

Although design knowledge is a useful input for training, it cannot be the sole source. Training must also take into account operational experience and empirical data. It is useful to formalize the basis for training by means of, for example, a task analysis. If the system has not yet been built, the task analysis should consider the design basis, similar systems (e.g., older versions without the new design), or data from experimental situations that reproduce the essential features of the design. If the system exists, it is, of course, essential that the operational experience is fully considered.

The mental model can be shaped through training, but only partly so. A more pervasive influence comes from the actual system design. While training is an isolated and often irregular experience, the influence from system design is present at every moment. It is therefore of little use to teach operators to do one thing if reality forces them to do something else. (The reason for that can be design deficiencies, maintenance, modifications, etc.) It is through actual experience that expertise is gradually built, and the expertise may be how to make the system work despite the design! Feedback from operational experience is therefore essential to determine what the operators ought to know and what their mental models actually are. Yet such feedback will also show that there is no single mental model for a task. Although there obviously are significant commonalities — because the nominal working conditions are the same — there will also be vast differences between the mental models that individual operators develop. Mental models have at least as many dimensions as expertise — which is another way of saying that the notion of a single and common mental model as the basis for training is a deceptive simplification. Operators can therefore not be “designed” through training.

SYSTEM DESIGN, AUTOMATION, AND HUMAN ADAPTATION

The design of an interactive system is based on encapsulated expertise, and the results express the consequences of that expertise. Scientists and developers try to formulate the necessary expertise in a concise way, to enable system designers to make the right decisions. An important issue is the distribution of roles and responsibilities between the operators and the technology. One aspect of this is the level of automation, i.e., how much of the system functionality should be automated, how much should be left for the operators to do, and under what conditions the transition from automatic to manual operation should be made.

When a system provides support for the operators — as diagnostic support, as planning support, as procedural support, or as automation in general — it is important that the operators can understand why a specific recommendation is given and how the support or the automation functions. Despite steady technological advances, operators are still required to intervene in the control of automated systems, for instance to cope with emergencies or to improve the productivity of discrete manufacturing processes. Doing so appropriately should not require that the operators monitor the functioning of the support system or the automation, since that would add yet another task. The operator should not have to become an expert on the support system as well as on the process. The understanding can be achieved if the expertise on which the support system is based is apparent from the way in which the system works. Automation and support systems must not be impenetrable black boxes, since that would leave the operator guessing when an intervention would be appropriate and what should be done. Experimental investigations have shown that intervention is governed to a large extent by the operators’ trust in the efficacy of the automated systems and by their confidence in their own abilities as manual controllers. Such trust can obviously be enhanced by making the system easier to understand.
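
As an aside, the interplay between trust and self-confidence can be caricatured in a few lines of code. The following is a minimal sketch in Python, not taken from any of the chapters: it assumes a simple comparison rule (stay with the automation while trust exceeds self-confidence, intervene otherwise) and an illustrative trust-update rate; the function names and numbers are assumptions for illustration only.

# Hypothetical sketch: mode choice from a comparison of trust and self-confidence.
def choose_mode(trust, self_confidence):
    """Return the control mode implied by a simple comparison rule."""
    return "automatic" if trust >= self_confidence else "manual"

def update_trust(trust, automation_failed, rate=0.2):
    """Trust decays towards 0 after an observed failure and recovers towards 1 otherwise."""
    target = 0.0 if automation_failed else 1.0
    return trust + rate * (target - trust)

# Example episode: repeated automation failures make the operator take over
# manually until trust has recovered above self-confidence again.
trust, self_confidence = 0.8, 0.6
for step, failed in enumerate([False, True, True, False, False, False]):
    trust = update_trust(trust, failed)
    print(step, round(trust, 2), choose_mode(trust, self_confidence))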

One of the main concerns throughout this book has been adaptation, as a crucial aspect of the supervision and control of dynamic environments. It is a primary concern of system design to reduce variability in order to ensure adequate product quality, system safety, and — not to forget — lower production costs. The design specifies how the system — the automation together with the humans — should operate and respond under given conditions. It follows that the functionality of the system will only be appropriate if the actual conditions match the expected conditions. In other words, the intentional structure of the system is only valid within the boundaries of a synthetic world where the unexpected does not happen. The real world, unfortunately, has a deplorable tendency to go beyond these artificially imposed boundaries; and if the conditions are unfavorable the result may be a major accident.

In order to improve this situation it is necessary to recognize the natural limitations of a deterministic design. The obvious solution is to make use of the flexibility of the operator, since it cannot be bound by design specifications. It is the expertise of the human operator that makes it possible to adapt the performance of the joint system, in real time, to unexpected events and disturbances. Every working day, across the whole spectrum of human enterprise, a large number of near-misses are prevented from turning into accidents only because human operators intervene. The system should therefore be designed so that human adaptation is enhanced. This will ensure that the joint system has the requisite variety to maintain control in an unpredictable environment.
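
The phrase "requisite variety" echoes Ashby's law from cybernetics: a controller can keep outcomes within a target set only if it can muster at least as many distinct responses as there are distinct disturbances. The toy sketch below is not from the book; the modular outcome table is a deliberately simple assumption, chosen so that each disturbance has exactly one compensating response.

# Toy illustration of the law of requisite variety (illustrative assumption,
# not a model from the book): with fewer distinct responses than disturbances,
# some disturbances can no longer be brought back to the target outcome.
def outcome(disturbance, response, n):
    # Modular pairing: response r cancels disturbance d when (d + r) % n == 0.
    return (disturbance + response) % n

def controllable(n_disturbances, n_responses):
    """Is there, for every disturbance, some response giving the target outcome 0?"""
    return all(
        any(outcome(d, r, n_disturbances) == 0 for r in range(n_responses))
        for d in range(n_disturbances)
    )

print(controllable(4, 4))  # True: the controller's variety matches the disturbances
print(controllable(4, 2))  # False: too little variety, control is lost for some disturbances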

Several chapters of this book have focused on a major aim of human-machine cooperation: the operator’s supervision of the joint system. Cooperation should, however, be understood as a symmetric concept — technology and automation should help the humans to perform their tasks, and humans should be able to improve and facilitate the performance of the technological parts of the system. Although the cooperation is nominally between humans and technology, it can also be seen as a cooperation between humans — except that one of the participants is represented by the implemented design. The machine’s capabilities for adaptation have been explicitly provided by designers. It is thus a kind of vicarious cooperation, which makes it all the more important that designers fully understand the situations they help create.

ERRONEOUS ACTIONS

Understanding the system is important in another sense, because it may reduce the occurrence of erroneous actions. Although erroneous actions are inevitable, they are now usually seen as the result of a mismatch or a discrepancy between the actual and assumed modes of system functioning rather than as a result of inherent weaknesses of human cognition. Increased expertise will consequently lead to a change in the occurrence of erroneous actions because the discrepancies will be different — and hopefully fewer. Yet even operators with a high degree of expertise will every now and then do things that lead to an undesired result, either because the situation is unfamiliar or because they rely too much on their routine and experience. It is, in practice as well as in principle, impossible to eliminate erroneous actions completely. Fortunately, erroneous actions are important sources of information about the nature of the operators’ expertise, and in particular about how well their mental models comply with the stipulated requirements for competence. The study of erroneous actions is therefore vital for system design, training, and learning alike.

Every adaptive system needs to fail now and then in order to be able to adapt. Without failures — in a wide sense of the word — there will never be any difference between expected and actual outcomes, hence no opportunity to improve performance. Erroneous actions are therefore essential for the operator to learn about the world and the actions. (Note, by the way, that an action is classified as erroneous only in hindsight.) Put differently, if all actions lead to success, i.e., if there never are any unwanted consequences, then there is no reason to change the actions, hence no need to learn. But if actions sometimes do not lead to the desired consequences, i.e., if there are failures, then it is necessary for the operator to determine why the failure occurred and adapt or change the behavior accordingly. This means changing the basis for the behavior, e.g., the knowledge, the assumptions (models), the ways of reasoning and deciding, etc. It is therefore important that the system provides the operator with adequate feedback. Dangerous consequences of erroneous actions should naturally be prevented, but without removing the feedback necessary for adaptation. This was emphasized by Reason (1988, p. 7), who noted that “system designers have unwittingly created a work situation in which many of the normally adaptive characteristics of human cognition are transformed into dangerous liabilities.”
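
The role of such feedback can be caricatured in a few lines. The sketch below is a hypothetical illustration, not a model proposed in the book: an operator's estimate of a process parameter is revised only when the expected and the actual outcome differ, so without such discrepancies there is nothing to learn from.

# Hypothetical sketch: adaptation driven by the difference between expected
# and actual outcomes. When the two agree there is no error signal, hence
# no change to the operator's internal model.
def adapt(estimate, action, actual_outcome, rate=0.5):
    expected_outcome = estimate * action
    error = actual_outcome - expected_outcome
    if error == 0:
        return estimate                      # no discrepancy, nothing to learn from
    return estimate + rate * error / action  # revise the model in proportion to the surprise

true_gain, estimate = 2.0, 1.0               # the operator starts with a wrong model
for _ in range(4):
    estimate = adapt(estimate, 1.0, true_gain * 1.0)
    print(round(estimate, 3))                # 1.5, 1.75, 1.875, 1.938 -> approaches 2.0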

Human performance is complex. Simon (1969, p. 25) offered the hypothesis that the apparent complexity of human behavior was a reflection of the complexity of the environment, rather than of the complexity of human cognition (or of human information processing). In other words, we are forced into complex behavior because we do not understand the complexity of the systems we have to deal with. There are several problems with this hypothesis. Firstly, the complexity of the environment is relative to the degree of expertise, i.e., the complexity is subjective rather than objective. One could rightly argue that a high degree of expertise amounts to a high level of complexity of cognition and of knowledge. Secondly, if the hypothesis were correct, then we should be able to produce simple performance just by making the environment less complex. Since simple performance presumably is both more efficient and less riddled with erroneous actions, many of the problems we struggle with could be solved at a single stroke. One may wonder why no one has done so yet.

One reason for that might be that the underlying premise is wrong. In other words, the complexity of human performance is not just a result of the complexity of the environment (which, after all, to a large extent is man-made). It is also a result of the complexity of human cognition. As several of the studies in this book have shown, human cognition is not a simple or single process that goes on during a given time interval. A diagnosis, for instance, may involve many separate activities that take place in parallel, so that, in a sense, there is no single process of diagnosis. Even if we analytically move to a higher level of abstraction and refer to diagnosis as a concerted effort or function, or even if diagnosis could proceed sequentially and unperturbed from start to end, that diagnosis would itself be embedded in other activities and in other tasks. This goes to show two essential things about human expertise: (1) actions and performance are always continuous, i.e., they do not abruptly start and stop; and (2) actions and performance — and therefore also cognition — are always part of a context.

WHY MUST WE BE EXPERTS?

Why does technology require that we are experts? If the answer is that we must be experts to cope with the complexity that technology carries with it, then why are technological systems so complex?

The complexity of technological systems can be reduced by decomposition or by isolating specific aspects of the functionality. Yet even if individual systems, or parts of systems, can be made simple — thereby improving usability and reducing the need for expertise — the technology we use is coupled. Systems, sub-systems, and components are interdependent, and when the parts of the system have to work together it becomes necessary at some stage and at some level to aggregate the decomposed system. This aggregation will inevitably require expertise, but if this expertise is concentrated at the design level the system becomes brittle.

High usability at one point in the system will therefore require high complexity in other parts. Complexity can only be reduced so long as the actual operating conditions comply with the design assumptions. When something goes wrong, the full complexity of the system strikes back — and pity the user who is not an expert. A good example of that is in the field of automation. A highly automated system, for instance a flight management system, can be simpler to use and easier to learn, but the operator is needed whenever the automation fails — a situation in which the system no longer is so simple. In other words, although the operator’s expertise is not required when everything works well and smoothly, it is definitely required when things begin to break down. The need for expertise is only curtailed as long as the system stays within the narrow limits of normal operation. This sharpens the distinction between normal and abnormal states, but also makes the transition from automatic to manual functions more abrupt, which in turn makes the situation more difficult for the operator. From the view of cognitive systems engineering it would be more sensible to have a gradual transition from normal to abnormal situations.

Expertise can only develop through experience in a relatively stable environment, and if an operator is deprived of that experience it will be difficult to become an expert. Vicarious experience provided through training can only be a partial substitute for the real thing. Attempts at codifying the experience and encapsulating it in operator support systems (such as expert systems and the like) are bound to aggravate the situation, because these solutions remove the operator even further from the process. Embedding the operator in additional layers of technology and complexity increases the need for expertise without giving it a natural opportunity to grow. Therefore, as long as complexity cannot be removed through radical changes to system design, operational expertise will be needed.

THE YELLOW BRICK ROAD

When we talk about expertise and technology, we are definitely not in Kansas anymore. We have been uprooted by a hurricane called “technology and complexity”, and find ourselves in a strange country that perhaps is Oz, but perhaps is not. We should recognize the possibility that there may not be a benign wizard at the end of the yellow brick road who can bring us back to the state of innocence. There may not even be a yellow brick road. We therefore need considerable courage, brains, and heart to find our way through the complexity we have been thrown into. This book has hopefully provided a good description of where we stand, and also of what some of the possible routes are. Bon Voyage!

REFERENCES

Reason, J. (1988). Cognitive aids in process environments: prostheses or tools? In E. Hollnagel, G. Mancini, & D.D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 7–14). London: Academic Press.

Simon, H. A. (1969). The sciences of the artificial. Cambridge, MA: The MIT Press.
