1

Work with Technology: Some Fundamental Issues

Erik HOLLNAGEL

Human Reliability Associates

Pietro Carlo CACCIABUE

CEC Joint Research Centre, Ispra, Italy

Jean-Michel HOC

CNRS - University of Valenciennes

COGNITION AND WORK WITH TECHNOLOGY

Working with technology is tantamount to working in a joint system where in every situation the most important thing is to understand what one is supposed to do. The joint system is the unique combination of people and machines that is needed to carry out a given task or to provide a specific function. In this book the focus is on a particular group of people called operators. An operator is the person who is in charge of controlling the system and who also has the responsibility for the system’s performance. In the joint system, both operators and machines are necessary for it to function; it follows that operators need and depend on machines and that machines need and depend on their operators. The decision of how far to extend the notions of people and machines, that is, how much to include in the description of the system, is entirely pragmatic and should not worry us in this context. The important thing is to recognize that the joint system exists in an organizational and social context, and that it therefore should be studied in vivo and not in vitro. A particular consequence of this is that expertise should not be seen as the individual mastery of discrete tasks, but as a quality that exists in the social context of praxis (cf. Norros, chapter 9, this volume). Where the boundaries of the joint system are set may, for all practical purposes, be determined by the nature of the investigation and the level of the analysis. In some cases, the boundaries of the joint system coincide with the physical space of the control room. But it is frequently necessary to include elements that are distributed in space and time, such as management, training, safety policies, software design, and so forth.

It is quite common to refer simply to a man-machine system (MMS)1, hence to define a joint system as the combination of a human and a machine needed to provide a specific function. The reason for using the singular “man and machine” rather than “people and machines” is partly tradition and partly the fact that we are very often considering the situation of the individual operator (although not necessarily an operator who is single or isolated). The term machine should not be understood as a single physical machine, for example, a lathe, a pump, or a bus, but rather as the technological part of the system, possibly including a large number of components, machines, computers, controlling devices, and so forth. An example is an airplane, a distillation column, a train, or even a computer network. Similarly, the term man should not be understood as a single person (and definitely not as a male person) but rather as the team of people necessary for the joint system to function. An example is the team of controllers in air traffic control.

The onus of understanding, of course, lies with the operator. Although it does make sense to say that the machine must, to a certain degree, understand the operator, the machine’s understanding is quite limited and inflexible — even allowing for the wonders that artificial intelligence and knowledge-based systems may eventually bring. In comparison, the operator has a nearly unlimited capacity for understanding. The operator who in all situations understands what to do is a de facto expert, and that expertise is slowly developed through use of the system. Given enough time, we all become experts in the systems we use daily. Some of us become experts in the practical sense that we are adept at using the system. Some of us become experts in the sense that we know all about the system, how it works, what the components are, how they are put together. Some of us become experts in the sense that we can explain to others how to use the system, or how the system really should be working.

Understanding How A System Works

If we consider working with a system that is completely understood, the use of it will be highly efficient and error free. The system can be completely understood either because it is so simple that all possible states can be analyzed and anticipated, or because its performance is stable and remains within a limited number of possible states although the system itself may be complex. (The two cases are, of course, functionally equivalent.) An example of the former is writing with a pencil. There is practically no technology involved in using the pencil, save for making sure that the lead is exposed, and there are few things that can go wrong (the lead can break, the pencil can break, the paper can tear).2 Furthermore, everyone within a given culture, say, Western civilization, knows what a pencil is for and how it should be used. (To illustrate that this is not an obvious assumption, consider for a moment how many Europeans would be able to write with ink and brush as effortlessly as the Japanese can.) Instructions are therefore not necessary and the person can use the pencil freely for whatever he wants (and problem-solving psychologists enjoy finding alternative uses). When it comes to pencils, we are all experts. We know how to use them and we can probably also explain how they should be used and why they work — at least on a practical level.3

This example is deliberately trivial, but things need only get slightly more complicated to see the problems that are characteristic of work with technology. Even a mechanical pencil or a ball-point pen may suffice, because the mechanism by which it is operated may not be obvious. Consider, for instance, the many different ways in which a mechanical pencil or a ball-point pen can be made. The mode of operation is not always obvious and getting the device into a state where it can be operated (e.g., where the lead is exposed so the pencil can write) can present a problem, although usually a small one. A second difference is that it is no longer possible to observe directly the state of the system (e.g., how much lead is left). The same goes for ball-point pens and fountain pens; it may, for instance, be impossible to write with a ball-point pen because it is out of ink, because the ink has dried, because there is not enough friction on the surface, or because the ball has gotten stuck. Finding out which of these is the cause requires diagnosis. When it comes to mechanical pencils and ball-point pens we still all know how to write with them.4 We can usually make them work, but it is less easy to explain how they work; for instance, what the internal mechanism of a ball-point pen is. Fortunately, we do not necessarily need to know the details of the mechanism in order to use the ball-point pen. Even for such a simple machine there are, however, several ways in which the device can malfunction, thereby introducing issues of diagnosis and repair.

An example of the latter, a complex system with a highly stable performance, can be found in many places. In daily life we need only think of cars and computers. In working contexts many processes spend most of the time in a highly stable region of normal performance, which may mislead the operator to think that he understands the system completely. In fact, it is the noble purpose of design and control engineering to ensure that the performance of the system is highly stable, whether it is a refinery, a blast furnace, a nuclear power plant, or an aircraft. If the presentation of the system states is mediated by a graphical user interface, the resulting “corrupted reality” may foster an impression of complete understanding. As long as the system performance remains stable, this does not present a problem. But the moment that something goes wrong, and in complex systems this seems to be inevitable (Perrow, 1984), the brittleness of the understanding becomes clear.

In order to work efficiently with technology we must have a reasonable degree of understanding of how we can get the technology or the machine to function. We need to be practical experts, but not theoretical ones. We can become practical experts if the machine is well designed and easy to use, that is, if the functions of the system are transparent (Hollnagel, 1988), if the information about its way of functioning and its actual state (feedback) is comprehensible, if it is sufficiently reliable to offer a period of undisturbed functioning where learning can take place, and if we are provided with the proper instructions and the proper help during this learning period. But we are not really experts if we are only able to use the system when everything works as it should, and unable to do so if something goes wrong. All of these issues are important for the use of technology, and all are treated in this volume. And all are related to human cognition, and in particular to the ways in which we can describe and explain human cognition.

Working With Dynamic Systems

This book is about working with dynamic systems. It is characteristic of a dynamic system that it may evolve without operator intervention. Thus, even if the operator does not interact with the system, for instance, when he is trying to diagnose or plan, the state of the system may change.5 The interaction may be paced or driven by what happens in the process, and the system does not patiently await the next input from the operator. Much of the work done in the field of human-computer interaction (HCI) refers to systems which are nondynamic, systems where there is little time pressure and where there are few, if any, consequences of delaying an action (Hollnagel, 1993a). The study of MMS and the study of HCI intersect in the design of interfaces for process control. The differences between dynamic and nondynamic systems, however, mean that the transfer of concepts, methods, and results between the two disciplines should be done with caution.
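
To make the notion concrete, the following minimal sketch (ours, not from the chapter; the tank process, its parameters, and the function names are illustrative assumptions) shows a state variable that keeps changing from one time step to the next even though the operator issues no command, so that any diagnosis or plan is made against a moving target.

```python
# Minimal illustration (hypothetical): a dynamic process evolves on its own,
# whether or not the operator intervenes.

def step(level, inflow, valve_opening, dt=1.0):
    """One time step of a simple tank: the inflow is outside the operator's
    control; the outflow depends on a valve the operator may adjust."""
    outflow = 2.0 * valve_opening          # arbitrary illustrative gain
    return level + dt * (inflow - outflow)

level, valve = 50.0, 0.5                   # initial state and operator setting
for t in range(10):
    # Suppose the operator is busy diagnosing and issues no command here:
    # the valve stays fixed, yet the level still drifts with the inflow.
    inflow = 1.0 + 0.3 * t                 # a disturbance that grows over time
    level = step(level, inflow, valve)
    print(f"t={t:2d}  level={level:6.2f}")
```

In a nondynamic system, by contrast, nothing would change between two operator inputs, and the operator's mental representation could not become stale in this way.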

Working with a dynamic system not only means that time may be limited, but also means that the mental representation of the system must be continuously updated. The choice of a specific action is based on the operator’s understanding of the current state of the system and his expectations of what the action will accomplish.

If the understanding is incomplete or incorrect, the actions may fail to achieve their purpose. But dynamic systems have even more interesting characteristics as follows (Hoc, 1993):

•    the system being supervised or controlled is usually only part of a larger system, for example, a section of a production line or a phase of a chemical process;

•    the system being controlled is usually dynamically coupled with other parts of the system, either being affected by state changes in upstream components or itself causing changes in downstream components;

•    effective process control often requires that the scope is enlarged in time or space; the operator must consider previous developments and possible future events, as well as parts of the system that are physically or geographically distant from the present position;

•    crucial information may not always be easy to access, but may require various degrees of inference;

•    the effects of actions and interventions may be indirect or delayed, thus introducing constraints on steering and planning; this effect may be worsened by using the corrupted reality of advanced graphical interfaces (Malin & Schreckenghost, 1992);

•    the development of the process may be so fast that operators are forced to take risks and resort to high-level but inaccurate resource management strategies;

•    and finally process evolution may be either discontinuous, as in discrete manufacturing, or continuous, as in the traditionally studied processes.

In the context of dynamic systems, expertise refers to the availability of operational knowledge that has been acquired through a prolonged experience with the plant, rather than to academic or theoretical knowledge based on first principles. The operational knowledge is strongly linked to action goals and the available resources. It contains a large amount of practical knowledge that is often weakly formalized and partly unknown to engineers, because it has arisen from situations that were not foreseen by the plant designers. Operational knowledge is structured to facilitate rapid actions rather than a complete understanding. It is concerned with technical aspects of the process as well as the lore that goes with the process environment. The latter is crucial in the supervision of processes with small time constants where resource management is of key importance.

INTENTION AND WORK WITH TECHNOLOGY

When people work with technology they usually have an intention, a formulated purpose or goal that guides and directs how they act as well as how they perceive and interpret the reactions from the machine or the process. The intentions can be externally provided by instructions, by written procedures, or by unwritten rules, or they can be a product of the operator’s own reasoning — or, of course, a mixture of the two. In the former case, it is important that the operator can accept and understand the goals that are stated in the instructions and that he knows enough about the machine to be able to comply with these goals. In the latter case, it is important that the operator has an adequate understanding or model of the machine, because otherwise he may reason incorrectly, reach the wrong conclusions, and follow goals that are not appropriate or correct.

When people work together, they are usually able to grasp the intentions of each other, either implicitly through inference or explicitly through communication. This mutual understanding of intentions is actually one of the foundations for efficient collaboration. Conversely, misunderstanding another operator’s intentions may effectively block efficient collaboration and lead to unwanted consequences.

In work with technology, two problems are often encountered. The first is that operators may have problems in identifying or understanding a machine’s intentions; clearly, a machine does not have intentions in the same way that a human has. Yet the machine has been designed with a specific purpose (or set of purposes) in mind, and the functionality of the machine is ideally an expression of these purposes. Therefore, if operators understand the purpose or the intention (the intended function) of the machine, as it is expressed through the design, they may be in a better position to use it efficiently and effortlessly. This understanding can be facilitated by an adequate design of the interface, in particular an adequate presentation of system states, system goals, and system functions. Much of this understanding is, however, achieved through practice. By working with the system, the operators gradually come to understand how it works and what the designers’ intentions were. This can clearly only be achieved if the system has long periods of stable performance. Consequently, operators have little possibility of understanding the system in abnormal situations, where, paradoxically, the need for understanding may be greatest. It is therefore very important that designers realize this problem and increase the emphasis on proper interaction during contingencies.

The second problem is that the machine has no way of understanding the operator’s purpose or intentions. (We here disregard the few attempts in artificial intelligence to apply intent recognition because they are of limited practical value.) This means that the machine is unable to react to what happens except by means of predefined response patterns (cf. Hollnagel, chapter 14, this volume). Many years ago, Norbert Wiener pointed out that the problem with computers (hence with technology and machines in general) is that they do what we ask them to do, rather than what we think we ask them to do. If machines could only recognize what the operator intended or wanted them to do and then did it, life would be so much easier.

The operator’s intentions must somehow be translated into actions to achieve the goal. We must therefore be able to establish a correspondence between how we believe the machine works and how it actually works. This has also been expressed as the problem of mapping between psychological variables and physical variables. Norman (1986), in particular, has described what he called the gulf of execution and the gulf of evaluation. The gulf of execution denotes the situation that exists when the operator has a goal but does not know how to achieve it, when he does not know how to manipulate or control the machine. The gulf of evaluation characterizes the situation where the operator does not understand the measurements and indications given off by the system, when he cannot interpret or make sense of the current system state. Another way of expressing this is by saying that we must have a model of the system which enables us to come up with the appropriate sequence of actions as well as to understand the indications and measurements that the system provides. However, expressing it using the model metaphor hardly makes the problem any easier to solve! For the novice, the gulfs of execution and evaluation can be serious impediments to work, while for the expert they are fissures rather than chasms.

COOPERATION BETWEEN PEOPLE AND MACHINES

A main concern of this book is the cooperation between people and machines, today epitomized by the coveted cooperation between humans and computers. As defined above, this cooperation takes place in the context of a joint system where people and machines mutually depend on each other. Cooperation in itself requires that the two cognitive systems recognize that there is a coupling or dependence between the tasks they must carry out to attain their goals, and further that they both decide to modify their tasks to prevent possible conflicts and achieve a mutual advantage. (A highly relevant, but more formal, treatment of cooperation can be found in n-person game theory, cf. Poundstone, 1993; Rapoport, 1970.) Cooperation is obviously made easier if there is a representation of a common goal, but this is not strictly necessary. The common goal may simply be to avoid conflicts (for example, over access to resources) or to increase efficiency, and the need to cooperate may be realized as the actions are carried out. Cooperation between people can occur spontaneously because they can recognize the intentions of each other. Cooperation between people and machines must, in contrast, be planned and prepared by the system’s designers. In both cases, the greater the operator’s experience and expertise, the easier it becomes to establish the cooperation.

In order for the joint system to function efficiently, reliably, and safely it is essential that it be controlled. The essence of control is that unwanted deviations from the desired or prescribed course of development are reduced to a minimum or do not occur at all. Control can be accomplished if one part of the joint system completely controls the other, or if they jointly cooperate to achieve and maintain the needed equilibrium (e.g., the cybernetic description of the homeostat [Ashby, 1956]). As humans, we would like to think of the situation where we are in complete control of the machine. However, there are many situations where the opposite is actually the case, although they are not always recognized as such (for example, modern glass cockpits with displays driven by computer graphics systems). With complex systems, exemplified by most of the process environments that are described throughout this book, it is practically impossible for one part of the system to achieve complete control over the other. The couplings and interactions are simply too many and too complex to be exhaustively analyzed and described, hence to be covered by the design. It is furthermore unreasonable to expect that the operator of the system should be able to do what the designer could not, namely to understand adequately what goes on during the use of the system. A reasonable degree of control may be attained for the normal range of situations, which basically means those situations that occur frequently enough for learning to occur. This control is achieved because the operator becomes an expert, rather than because the machine is well designed. (If the system were well designed, then there really would not be any need for the operator to learn and become an expert!) But there will always be a much larger number of situations that deviate from the normal in one way or another, and where control therefore will be insufficient or lacking.

Returning to the gulfs of execution and evaluation, it seems appropriate to distinguish between the gulfs as they exist for normal conditions and for contingencies. When the system has been working for some time, it is reasonable to expect that the gulfs of execution and evaluation for the normal (or daily) range of situations have shrunk. This happens because the operator becomes an expert, through either system or task tailoring (Cook, Woods, McColligan, & Howie, 1990). But in situations that deviate from the normal, the gulfs of execution and evaluation will clearly remain. Here the operator is no longer an expert and there may be little or no support in the machine’s functionality, because the designers have not anticipated these cases. There are thus two issues for the cooperation between people and machines: first, how to ensure that the gulfs of execution and evaluation narrow quickly for normal situations (by supporting and enhancing the building of expertise) and, second, how to ensure that the gulfs of execution and evaluation are not insurmountable in the case of unexpected situations and events. To remain with the metaphor, if the gulfs are insurmountable or cannot be bridged, the operator cannot do anything to achieve his goals, nor can he understand or interpret the system correctly. The occurrence of these conditions should clearly be prevented at all costs. In practice most situations are neither absolutely safe nor absolute disasters. They rather occupy a middle ground, which provides a fertile field for research and development.

The chapters of this book describe in detail the many aspects of the cooperation between people and machines. Without preempting the conclusions, we can safely say that, on the whole, we do not completely understand the technology or the machines we work with. Understanding may be incomplete, both in terms of knowing how the technology can be made to work (the gulf of execution), and in terms of comprehending the basic principles of functioning (the preceding discussion of intention). Joint systems can function in many modes and respond in many different ways; this multiplicity creates a complexity that usually goes beyond what humans are able to grasp. In consequence of that, we do not always completely understand what we need to do to achieve a specific goal state or what the consequences of an action will be. In order to cope with the complexity, we usually rely on simplified descriptions or models of the system — both of the machines and of the people. Such descriptions are, however, only adequate for normal or frequently occurring situations where the conditions do not deviate too much from what was assumed or expected. In all other cases, the descriptions will be lacking in one way or the other and it is therefore difficult both to control the machine and to predict what the effects of specific actions will be, hence to form intentions and understand their consequences.6

The increasing amount of automation in industrial systems has progressively removed the operator from the actual control loop, turning the human role into that of a supervisor of the process development. This does not imply that the operators are outside the plant management. On the contrary, it means that the design of the control and management strategies of the plant must adopt a new perspective that includes the cooperation between humans and “intelligent” support systems, or joint cognitive systems. Here techniques such as Distributed Artificial Intelligence (DAI) and Multilevel Flow Modeling (MFM) are used to develop new approaches to design that can account for such cooperation.

Furthermore, the supervisory role of the humans in many cases implies the presence of a number of human supervisors. This raises the problem of extending the joint system cooperation to include also the human-human interaction. In particular, strategies for training and the building of expertise require direct feedback from the observation of real work settings for the development of appropriate models of collaboration.

JOINT SYSTEMS AND ERRONEOUS ACTIONS

The term human error is commonly used to describe a certain class of human actions. As clearly demonstrated by Hollnagel (1993c), the term is seriously misleading because it can denote a cause as well as a class of actions. A better term is therefore erroneous action, which clearly refers to an event that was deemed to be incorrect.

Although erroneous actions are unavoidable, they are not necessarily only harmful. A so-called error is often the result of the operator’s attempt to achieve a goal in an environment that is uncertain and incompletely known (cf. Rizzo, Ferrante, & Bagnara, chapter 12, this volume). Failure is necessary for adaptation to occur, and adaptation is necessary because the system is incompletely specified and incompletely known. Even after long experience, operators will encounter situations where they do not know with certainty whether their actions will achieve the desired goal. If the system is sufficiently resilient and forgiving, imprecise actions may be beneficial because they create a potential for learning. Design engineers should therefore not aim to eradicate erroneous actions completely, but rather try to design systems that will be conducive to learning, while also protecting against the seriously harmful effects of incorrect actions.

Many things can be said about human erroneous actions, and in relation to joint systems it is particularly important to consider the distinction between two categories of erroneous actions: those that are system induced and those that are not. The latter can be called residual erroneous actions, or erroneous actions stemming from person-related causes (cf. Figure 1). The distinction is to some extent arbitrary; any error or event analysis can always be taken one step further, and that step may lead to a complete revision of the identified cause (Woods, Johannesen, Cook, & Sarter, 1994). A person-related cause may, for instance, be due to an inadequate working environment which, however, only shows its effects through a person-related cause. However, in practice, there are large classes of actions that clearly and easily fall into one or the other category.

The importance of the two categories is that although it is possible to do something effective about system-related causes, it is usually either impossible or at least very difficult to do something about person-related causes. If, for instance, the cause of an erroneous action is found to be a fluctuation of attention, there is little one can do about that, because it is an inherent characteristic of human cognition. If, on the other hand, the cause turns out to be a mismatch between a procedure and the design of the controls, something can be done. The situation will usually be less clear cut, but the principle prevails. Erroneous actions are valuable indicators of the quality of the MMI and of the physical and functional interface. They should always be carefully examined rather than hastily assigned to simplified categories — in particular the garbage can called human error. The indicators must be understood correctly and interpreted carefully in order to avoid inaccurate or superficial solutions. This requires a good appreciation of the complexity of MMI, and of the many aspects that are important in human work with technology.

Figure 1: A taxonomy of human error concepts.

MODELING OF HUMAN BEHAVIOR

Modeling human behavior is a complex and ambitious task that first of all demands a clear distinction between the concept of a model and the concept of a simulation. Even though the terms are frequently used as synonyms, they are quite different. Modeling and simulation are two complementary ways of representing a process, which can be either physical or cognitive. In both cases the purpose is to explain past events as well as to predict the time evolution within a certain margin of uncertainty and given boundary conditions. A model is essentially a theoretical description of a process or a system based on a number of hypotheses and simplifying principles, which can be formulated as analytic or lexicographic expressions (the model language). A simulation is a concrete expression or instantiation of the model in a form that is controllable or executable, for example, for a practical application or for computation. In the domain of thermohydraulics, for instance, the models of mass-flow and heat behaviors are governed by the well-known Navier-Stokes equations, but the description of real situations can only be obtained by using a computerized simulation based on a parametrization of the theoretical model. In the field of human performance, a similar distinction must be maintained, although it must be acknowledged that most of the models have been inspired by the information processing paradigm in one way or another, and hence are relatively easy to turn into simulations, although such simulations are more often proposed than actually implemented.
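
As a deliberately simple illustration of the distinction (our own sketch, not the authors'; the cooling example, the function name, and all parameter values are assumptions), the model below is a single differential equation, whereas the simulation is an executable, parameterized instantiation of it obtained by numerical integration:

```python
# Model (theoretical description): Newton's law of cooling,
#   dT/dt = -k * (T - T_ambient)
# Simulation (executable instantiation): a parameterized numerical
# integration of that model for concrete boundary conditions.

def simulate_cooling(T0, T_ambient, k, dt, n_steps):
    """Euler integration of the cooling model; T0, T_ambient, k, and dt are
    the boundary conditions and parameters the simulation must supply."""
    T = T0
    trajectory = [T]
    for _ in range(n_steps):
        T = T + dt * (-k * (T - T_ambient))
        trajectory.append(T)
    return trajectory

# The same model supports many different simulations, one per parameter set.
print(simulate_cooling(T0=90.0, T_ambient=20.0, k=0.1, dt=1.0, n_steps=5))
```

The same relation holds, on a much larger scale, between the Navier-Stokes equations and a computerized thermohydraulic plant code, or between a theory of cognition and a cognitive simulation built from it.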

A crucial problem tackled by research on joint cognitive systems and cognitive ergonomics is the development of models of operators in control of complex dynamic systems (e.g., Hollnagel, Mancini, & Woods, 1986). The models are useful as expressions of our understanding of human cognition at work and as a foundation for building symbolic simulations of operators. The need to model MMI in the control of joint systems can be deduced from the solutions to the control of mechanical systems as described by, for example, cybernetics. Linear system theory, for example, has shown that the optimization of system performance can only be achieved if the dynamics of the system to be controlled are known and if a performance criterion is defined (Conant & Ashby, 1970; Francis & Wonham, 1975). This has also been expressed by the maxim that “every good regulator of a system must be a model of that system.”
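
The maxim can be illustrated with a minimal sketch (again ours and purely hypothetical: the tank process, the candidate valve settings, and the function names are assumptions). The regulator below can only choose a sensible action because it contains an internal copy, that is, a model, of the process it regulates:

```python
# Illustrative sketch: a regulator that embeds a model of the process it
# controls. It predicts, with its internal model, the effect of each
# candidate action and picks the one whose predicted outcome is closest
# to the setpoint.

def process(level, inflow, valve):
    """The 'real' plant (here identical to the model; in practice it differs)."""
    return level + inflow - 2.0 * valve

def regulator(level, inflow, setpoint, candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Chooses the valve setting whose *modeled* outcome is best; the
    regulator can only do this because it contains a model of the plant."""
    def internal_model(valve):              # the regulator's copy of the plant
        return level + inflow - 2.0 * valve
    return min(candidates, key=lambda v: abs(internal_model(v) - setpoint))

level, setpoint = 55.0, 50.0
for t in range(5):
    inflow = 1.0
    valve = regulator(level, inflow, setpoint)
    level = process(level, inflow, valve)
    print(f"t={t}  valve={valve:.2f}  level={level:5.2f}")
```

If the internal model diverges from the real process, the chosen actions systematically miss the setpoint, which is the control-theoretic counterpart of an operator whose understanding of the plant is incomplete or incorrect.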

When an operator controls a complex system, several activities are necessary to keep the system in normal conditions, for instance, supervision, detection and diagnosis of perturbations, and planning and execution of actions/procedures (Bainbridge, 1983; De Keyser, 1987; Edwards & Lees, 1974; Sheridan, 1985). These activities all put considerable demands on human cognition and hence present a challenge for system design. The facilitation of such activities can be achieved by including the simulation of human behavior among the techniques and methodologies for the design of procedures, interfaces, and decision support systems.

There exist a number of theories and models of cognition, mostly derived from the domain of psychology, although with some help from the wider field of philosophy. The attempt to apply such theories through practical simulations for design or analysis purposes is a recent endeavor that has been accelerated by the availability of a highly suitable technology (cheap but powerful personal computers and high-level, object-oriented programming languages). The complexity of the models has increased to match the progressive evolution of operator roles in the context of their work environment, going from purely manual control to the full supervisory control tasks of modern systems, as in the nuclear power and aviation domains. In the 1960s and 1970s, successful models of control engineering were developed to study and simulate manual control behavior. The then current theories of human cognition reflected the needs of that generation of control demands, based on information processing and the supervision of remote and complex processes. Among the first generation of models, those based on control theory proved their value in the design of controls and displays (Kleinman, Baron, & Levison, 1971; McRuer & Jex, 1967); so did others based on signal detection theory and estimation theory (Sheridan & Ferrel, 1974). During the 1980s, the most applied models of cognition in the process control field were the step-ladder model of Rasmussen (1976, 1986; Rasmussen, Pejtersen, & Schmidt, 1990) and the underspecification theory of Reason (1990). A number of simulations were also developed using logic structures and mathematical approaches such as object-oriented programming, blackboard architectures, fuzzy set theories and fuzzy logic, neural networks, and so forth. There are several reviews of the various approaches to modeling, made at different times and thus reflecting slightly different focuses, e.g., Pew, Baron, Feehrer, and Miller (1977), Rouse (1980), Sheridan (1985), and the more recent review by Stassen, Johannsen, and Moray (1990).

A key issue related to models and simulations of human behavior is their applicability and usefulness. It is important to stress that a simulation of the operator’s decision making or behavior gives the designer and the analyst more flexibility and better means to evaluate potential solutions. In particular, the cognitive simulation can be used to encompass a wider range of situations and thereby obtain information in a cheap and rapid manner, which might otherwise require long and costly field analyses and questionnaires. Moreover, the use of a simple theory of cognition, that is, a theoretical model that adheres to the premise of simplicity, provides a viable alternative that both satisfies the need for a reference model for simulation purposes and acknowledges the impossibility of containing in the model the whole richness of human decision making and performance.

This view has been expressed by a Minimal Modeling Manifesto (Hollnagel, 1993b) which provides a basis for a consistent approach to the modeling of humans in interactive systems, and which can be formulated as follows: “A Minimal Model is a representation of the main principles of control and regulation that are established for a domain — as well as of the capabilities and limitations of the controlling system” (p. 379).

A minimal model is concerned with control and regulation, hence with cognition at work. The model is minimal because it tries to make as few assumptions as possible, but not because the phenomena it addresses are minimal or simplified. A minimal model may be specific for a given application or domain, such as a nuclear power plant, but is linked to other minimal models by the notion that there will be strong commonalities across a set of applications, and possibly even across a set of widely different applications. This may eventually enable the construction of a common minimal model, such as a model that represents the basic characteristics of humans as controllers.

The advantages offered by the model/simulation approach must, however, not undermine the role and importance of actual observations and field analyses. These activities do, indeed, represent the fundamental and necessary basis for model and theory building in the first place. Empirical investigations are also necessary when a model has to be validated or substantiated — although from the minimal modeling point of view there is no strong need to validate the models. Observations and field analyses are further required as a basis for estimating the parameters of the simulation. Field experiments, if well planned and designed, usually produce a wealth of information that demands extensive analysis before its implications for the theoretical work are fully worked out.

MODELING AND DEVELOPMENT OF COMPETENCE

The operator’s knowledge comes from basic education, training, and direct experience in working with the system. Through the basic education, the operator learns the physical laws underlying the process, the principles of functioning for the components, and the principles that govern the behavior of complex systems. It is, however, only through training and experience that the operator becomes acquainted with how the plant or the process conducts itself in normal and abnormal conditions. In training, individuals learn to operate on a system that is rarely working under normal operating conditions, whereas during everyday experience the opposite is the case. The experiences gained from training are thus only moderately representative of the system’s actual behavior. This dichotomy may create a mental bias in the operator, who may overestimate the plant capabilities and thus not be prepared to react when anomalies occur. A first step in overcoming this bias may be to develop models for training and simulations of the three basic components of working socio-technical systems: the operator(s), the machines, and the environment. This is treated more extensively by Samurçay in chapter 7, this volume.

The lessons learned from the simulation approaches developed so far have been particularly illuminating with regard to some fundamental aspects of the modeling of cognitive activities, such as the possibility of extrapolating and creating a knowledge base starting from bounded experience. In the perspective of developing a safety culture, an interesting feature of such cognitive simulations is the possibility of obtaining artificially condensed expertise in relation to a specific dynamic socio-technical reality. The question is often put as how one can describe what the operator knows (this, for instance, is the central topic in knowledge acquisition). It might, however, be considered whether it is not more important to find out what the operator does not know or what he knows incorrectly, because these knowledge “deficiencies” often are the cause of unwanted consequences or events (erroneous actions). Although it is next to impossible to describe what the operator knows, because this evokes the issues of tacit knowledge and what AI calls the frame problem, it may be more manageable to describe what the operator does not know, that is, to identify the discrepancies between the expected and the actual knowledge. This discrepancy can be observed or deduced directly from experience, and it is this (lack of) knowledge that is important — for performance studies and for design.

Another interesting possibility offered by simulating the interaction between cognitive and machine models is that it may strengthen the self-reflexive capabilities of the operator. Indeed, it can be envisaged that an operator could “look” through the results of the cognitive model included in the simulator (have a look at himself while controlling the plant in situations which may not occur in reality). Such alternate (almost counter-factual) reality self-reflection simulates, in an artificial situation, second-level cognitive experience, that is, reflection on one’s own perceptions (Piaget, 1964), which enables the construction of more appropriate mental structures for interpreting the experience of the world.

REFERENCES

Ashby, W. R. (1956). An introduction to cybernetics. London: Methuen & Co.

Bainbridge, L. (1983). The ironies of automation. Automatica, 19, 775–780.

Conant, R. & Ashby, W. R. (1970). Every good regulator of a system must be a model of that system. International Journal of Systems Science, 1, 89–97.

Cook, R. I., Woods, D. D., McColligan, E., & Howie, M. B. (1990, Sept.). Cognitive consequences of ‘clumsy’ automation on high workload, high consequence human performance. Paper presented at the Fourth Annual Space Operations, Applications and Research Symposium. Washington, DC.

De Keyser, V. (1987). Structuring of knowledge of operators in continuous processes: case study of a continuous casting plant start-up. In J. Rasmussen, K. Duncan, & J. Leplat (Eds.), New technology and human error (pp. 247–260). Chichester, UK: John Wiley.

Edwards, E. & Lees, F. (1974). The human operator in process control. London: Taylor & Francis.

Francis, B. A. & Wonham, W. M. (1975). The internal model principle of linear control theory. Proceedings of 6th IFAC World Congress, Boston, MA, Oxford: Pergamon Press [Paper 43.5].

Hoc, J.M. (1993). Some dimensions of a cognitive typology of process control situations. Ergonomics, 36, 1445–1455.

Hollnagel, E. (1988). Cognitive models, cognitive tasks, and information retrieval. In I. Wormell (Ed.), Knowledge engineering (pp. 34–52). London: Taylor Graham.

Hollnagel, E. (1993a, Sept.). The design of reliable HCI: The hunt for hidden assumptions. Invited keynote lecture, HCI 93. Loughborough, UK.

Hollnagel, E. (1993b). Requirements for dynamic modelling of man-machine interaction. Nuclear Engineering and Design, 144, 375–384.

Hollnagel, E. (1993c). Human reliability analysis: context and control. London: Academic Press.

Hollnagel, E., Mancini, G., & Woods, D. D. (1986). Intelligent decision support in process environments [NATO ASI Series, Vol. 21]. Berlin: Springer Verlag.

Kleinman, D. L., Baron, S., & Levison, W. H. (1971). A control theoretic approach to manned-vehicle systems analysis. IEEE Transactions on Automatic Control, AC-16, 824–832.

Malin, J. T., & Schreckenghost, D. L. (1992). Making intelligent systems team players: Overview for designers [Tech. Rep. No. NASA 104751]. Houston, Texas: Johnson Space Center.

McRuer, D. T. & Jex, H. R. (1967). A review of quasi-linear pilot models. IEEE Transactions on Human Factors in Electronics, HFE-8, 231–249.

Meetham, A. R., & Hudson, R. A. (Eds.) (1969). Encyclopaedia of linguistics, information and control. Oxford: Pergamon Press.

Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W. Draper (Eds.), User centered system design: new perspectives on human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

Perrow, C. (1984). Normal accidents: living with high-risk technologies. New York: Basic Books.

Pew, R. W., Baron, S., Feehrer, C. E., & Miller, D. C. (1977). Critical review and analysis of performance models applicable to man-machine systems evaluation [Tech. Rep. No. 3446]. Cambridge, MA: Bolt, Beranek & Newman.

Piaget, J. (1964). Logique et connaissance scientifique [Logic and scientific knowledge]. Paris: Gallimard [Encyclopédie de la Pléiade].

Poundstone, W. (1993). Prisoner’s dilemma. Oxford, UK: Oxford University Press.

Rapoport, A. (1970). N-person game theory: Concepts and applications. Ann Arbor: The University of Michigan Press.

Rasmussen, J. (1976). Outlines of a hybrid model of the process operator. In T. Sheridan & G. Johannsen (Eds.), Monitoring behavior and supervisory control (pp. 371–384). New York: Plenum Press.

Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. Amsterdam: North Holland.

Rasmussen, J., Pejtersen, A. M., & Schmidt, K. (1990, May). Taxonomy for cognitive work analysis. Paper presented at the First MOHAWC Esprit II-BR Workshop. Liège, Belgium.

Reason, J. (1990). Human error. Cambridge, UK: Cambridge University Press.

Rouse, W. B. (1980). Systems engineering models of human-machine interaction. Amsterdam: North Holland.

Sheridan, T. B. (1985, Sept.). Forty-five years of man-machine systems: history and trends. Keynote Address at the Second IFAC Conference on Analysis, Design and Evaluation of Man-Machine Systems. Varese, Italy.

Sheridan, T. B., & Ferrel, W. R. (1974). Man-machine systems: information, control and decision models of human performance. Cambridge, MA: MIT Press.

Stassen, H. G., Johannsen, G., & Moray, N. (1990). Internal representation, internal model, human performance model and mental workload. Automatica, 26, 811–820.

Woods, D. D., Johannesen, L. J., Cook, R. I., & Sarter, N. B. (1994). Behind human error: Cognitive systems, computers and hindsight. Columbus, OH: CSERIAC.

1The term MMS will be used throughout this chapter to avoid clumsier expressions such as person-machine interaction. Similarly, the term MMI will be used to denote the interaction that takes place between the operator(s) and the machine(s).

2Among the things that can go wrong we must include the failure to write or draw what the intention was. There is a difference between using a pencil to write and using a pencil to write a brilliant essay or sketching a portrait. In the former case, the use of the pencil qua pencil is the goal. In the latter case, the use of the pencil serves another goal; it is the means to achieve a goal. The work of the operator involves both uses, although training often only emphasizes the former.

3A slightly more complicated example is the use of a VCR — to say nothing of programming it. Although ostensibly a very simple system with a very simple function, it befuddles a large proportion of the population — from the person on the street to the (former) President of the United States, and sometimes even specialists in MMI!

4However, these days not everyone can write with a fountain pen.

5A more formal definition is: “a system in which the instantaneous value of a given state variable depends upon the values of the same and other state variables at previous instances” (Meetham & Hudson, 1969, p. 655).

6It may be added that this problem is not peculiar to technological systems, but exists for other systems as well. As an example, consider the problems in accounting for the economy of a nation, the behavior of the stock market, and so forth.
