4

Simulation of Cognition: Applications

Pietro Carlo CACCIABUE

CEC Joint Research Centre, Ispra, Italy

Erik HOLLNAGEL

Human Reliability Associates

This paper gives an overview of the current state of the art in man-machine systems interaction studies, focusing on the application of the simulation of cognition to highly automated working environments and the role of humans in the control loop. In particular, it is argued that the need for sound approaches to the design and analysis of Man-Machine Systems (MMS) has given rise to two categories of modeling: macro-cognition and micro-cognition models. These architectures, theoretically, can cover all possible varieties of models of cognition. A number of existing model developments are described from the viewpoint of their domains of application, analyzing their validity and scope of application.

The interest in and concern for human cognition has gathered momentum for a number of years and seems at the present time — the early 1990s — to have become a sweeping force. This is mainly due to the recognition from industrial applications that human cognition is an important constituent of system performance. The change started with the slow acknowledgment that man-machine systems (MMSs) were of a different nature than purely technical systems. Human factors engineering (ergonomics) had developed an approach to adjusting the interface between the human operator and the machine so that the worst violations of human performance integrity were avoided. But human factors engineering still considered an MMS as a human + a machine, hence as a simple aggregation of distinct elements, rather than as a joint system with properties that might differ from what was known for either part. The increasing concern for the prevention of unwanted consequences in complex systems slowly forced the focus of attention to turn from the overt manifestations of the interaction and the interface itself to the processes and functions that caused and shaped the interaction (Hollnagel, 1993). As a result, terms such as cognition, cognitive modeling, cognitive engineering, and so on, have practically become buzz-words in the general process control community.

Moreover, the need to account for all factors which play an important role in the control of a plant has led to modeling the man-machine system using computer-based simulations. This need can naturally be explained by analogy with the evolution of analytical techniques for technological systems: In linear system theory, for example, it has been shown that the optimization of system performance can only be achieved if the dynamics of the system to be controlled are known and a performance criterion is defined; in optimal filter theory, one also needs to be informed about the statistics of the disturbances to be compensated. It is thus natural to conclude that, for the correct control of a plant, a model of the total man-machine system should become available (Mancini, 1986). There exist in the literature many reviews of the various approaches and methodologies dedicated to this modeling issue: Examples of well-structured and complete analyses of the state of the art, made at different times and thus reflecting slightly different focuses, are the works of Pew, Baron, and colleagues (1977), of Rouse (1980), of Sheridan (1986), and the more recent survey of Stassen, Johannsen, and Moray (1990).

In this paper we will review the existing models of cognition from the viewpoint of the domains to which they have been applied, identifying also the scope and quality of the analysis that they can perform. We will concentrate firstly on the identification of two broad categories of models of cognition, that is, macro-cognition and micro-cognition approaches, which between them can include all modeling developments. We will then discuss models of cognition in relation to four domains of application, namely Design, Analysis and Evaluation, Training, and On-line Support. Finally, some conclusions will be drawn, identifying the major problems and possible solutions for the application of models of cognition.

THE SIMULATION OF COGNITION

Human cognition is not a single thing or a single phenomenon. It can be viewed from many aspects and in many contexts. There are many different motivations for being interested in human cognition and these will to a large extent determine where the focus is put and which techniques and methods are applied. One particular interest is the simulation of cognition in the service of a purpose — in order to solve a practical problem.

The simulation of cognition, as a scientific endeavor, is quite old. Many of the developments which have led to the present state-of-the-art have their roots in the simulation of cognition that took place in the late 1960s and early 1970s — primarily at the Carnegie-Mellon University (Newell & Simon, 1972). This type of simulation must be distinguished clearly from the type of simulation we are talking about here. One way of doing that is to make a distinction between micro-cognition and macro-cognition.

Micro-Cognition And Macro-Cognition

Micro-cognition is here used as a way of referring to the detailed theoretical accounts of how cognition takes place in the human mind. This distinguishes it from the concern for cognition that is found within Artificial Intelligence, where the focus is on the “mechanisms of intelligence” per se, rather than on the way the human mind works. Micro-cognition is concerned with the building of theories for specific phenomena and with correlating the details of the theories with available empirical and experimental evidence. Typical examples of micro-cognition are studies of human memory, of problem solving in confined environments (for example, the Towers of Hanoi), of learning and forgetting in specific tasks, of language understanding, and so on. Many of the problems that are investigated are “real,” in the sense that they correspond to problems that one may find in real-life situations, at least by name. But when they are studied in terms of micro-cognition the emphasis is more on experimental control than on external validity, on predictability within a narrow paradigm rather than on regularity across conditions, and on developing models or theories that go into depth rather than breadth. Micro-cognition relinquishes the coupling between the phenomenon and the real context in favor of the coupling with the underlying theory or model.

Macro-cognition refers to the study of the role of cognition in realistic tasks, that is in interacting with the environment. Macro-cognition only rarely looks at phenomena that take place exclusively within the human mind or without overt interaction. It is thus more concerned with human performance under actual working conditions than with controlled experiments. Typical examples of macro-cognition are diagnosis, controlling an industrial process, landing an aircraft, writing a program, designing a house, planning a mission, and so forth. Some phenomena may, in principle, belong to both categories. Examples are problem solving, decision making, communication, information retrieval, and so on. But if they are treated as macro-cognition the interest is more on how they are performed and how well they serve to achieve their goals than on the details of what goes on in the mind while they are performed.

The simulation of micro-cognition is therefore radically different from the simulation of macro-cognition. The former is found in most of the early AI systems, such as EPAM and the General Problem Solver, and has more recently reached its high point in the theory of unified cognition — SOAR (Newell, 1990). The latter is found in — as yet — only a few cases such as COSIMO (Cacciabue et al., 1992), CES (Woods et al., 1987), and AIDE (Amalberti & Deblon, 1992).

Macro-Cognition And Simulation

The simulation of (macro-)cognition is always done with a specific practical purpose in mind. The concern is therefore basically one of maintaining a sufficient correspondence between the known or observed regularities of the target phenomenon and the outcome of the simulation. In technical terms, the simulation must be isomorphic to the phenomenon being modeled. There are many reasons why a simulation of cognition is used rather than, for example, a study of what happens in practice. It depends on the specific purpose (cf. later). Typical reasons are that access to a simulation may be easier than access to a workplace, that a simulation may cover a wider range of situations and events than can easily be observed, that a simulation may be controlled and restarted “at will” (for example, using snapshots and breakpoints or partial backtracking), and that a simulation provides a better record of what went on. (Many of these advantages are, of course, contingent upon how well the simulation has been made.)

The simulation of cognition should not be performed for the purpose of studying cognition in itself, but rather for the purpose of studying the kinds of performance where cognition plays a significant part. The relevant occurrences are those that manifest themselves in overt and observable performance, rather than those that only take place in the mind of the agent. It would therefore be more correct to talk of the simulation of cognition-based performance; but since this is rather cumbersome, we will continue to use the term “simulation of cognition.” To emphasize that the focus is on macro-cognition rather than micro-cognition, the following definition is offered:

The simulation of cognition can be defined as the replication, by means of computer programs, of the performance of a person (or a group of persons) in a selected set of situations. The simulation must stipulate, in a pre-defined mode of representation, the way in which the person (or persons) will respond to given events. The minimum requirement of the simulation is that it produces the response the person would give. In addition, the simulation may also produce a trace of the changing internal mental states of the person.
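For illustration, this definition can be rendered as a minimal sketch in program form (all events, rules, and state variables below are hypothetical and not taken from any of the models discussed): the simulated operator maps given events to responses through a pre-defined representation, here simple condition-response rules, and keeps a trace of its changing internal states.

```python
# Minimal sketch of a "simulation of cognition" in the sense defined above:
# the simulated operator responds to given events according to a pre-defined
# mode of representation (condition-response rules) and records a trace of
# its changing internal states. All rules and states are invented examples.

class SimulatedOperator:
    def __init__(self):
        self.state = {"alertness": "normal", "hypothesis": None}
        self.trace = []  # record of internal mental states over time

    def respond(self, event):
        # Pre-defined representation: simple condition-response rules.
        if event == "alarm":
            self.state["alertness"] = "high"
            self.state["hypothesis"] = "process deviation"
            response = "check overview display"
        elif event == "reading out of range":
            self.state["hypothesis"] = "component failure"
            response = "diagnose affected subsystem"
        else:
            response = "continue monitoring"
        self.trace.append(dict(self.state))  # snapshot of internal state
        return response

operator = SimulatedOperator()
print(operator.respond("alarm"))                 # minimum requirement: the response
print(operator.respond("reading out of range"))
print(operator.trace)                            # optional: trace of mental states
```

The minimum requirement (producing the response) corresponds to the returned value; the optional trace of internal mental states corresponds to the recorded snapshots.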

When the simulation of cognition is contemplated as the possible solution to a problem, it is important to make clear whether the simulation is going to correspond to empirical or theoretical knowledge. The difference between the two views is captured in Figure 1. In the first case the simulation corresponds to the dominant relations that are found in the empirical knowledge — which in turn is based on the regularity of the work environment (cf. also the notion of requisite variety, Hollnagel & Cacciabue, 1991). In the second case the simulation corresponds to the required relations that are derived from the underlying theory — which in its turn is based on the ontology of cognition. There should, hopefully, be a substantial correspondence between the empirical knowledge and the theories of cognition, hence also between the dominant relations and the required relations. If this correspondence is insufficient, something is terribly wrong. In any case it is safer to base the simulation on the dominant relations; theories may come and go, but the regularities and constraints of the real world are less likely to change suddenly.

FIGURE 1: Role and scope of Simulation of Cognition

DOMAINS OF APPLICATION

In order to better clarify the scope of human behavior simulation and how it is used in many domains, it is important to bear in mind that a model is primarily an efficient way to structure certain knowledge. The knowledge attempts to capture a part of reality by means of a set of abstractions, such as mathematical relations, words, graphical symbols, and so on. A model is often used without an underlying theory, simply to describe some links between inputs and outputs. This is, however, an improper use, since it provides no way of assessing the coherence of the abstractions. When reviewing the domains of application of human behavior simulation, it is therefore crucial to realize the limits and the terms of a model in relation to its goals.

In principle, there exist four domains of application, each of which has fostered several developments and simulations of human behavior modeling at various levels of complexity and generality. These are the domains of Design, Analysis and Evaluation, Training, and On-line Support.

Design

In the design of a process or a plant, the consideration of human cognition primarily affects the interfaces for the Supervision and Control (S&C) systems and the procedures for the plant management. Moreover, the evaluation and the optimization of the overall design of the plant requires the availability of a model to predict performance during both normal and contingency situations.

Design of Interfaces

The introduction of the human component into the design of display and information interfaces has been developing since the 1960s, with the application of models based on “information theory” (Senders, 1964) and “queuing theory” (Carbonell, 1966) for evaluating the cost of observing information displays. The main weakness of these approaches was that they relied on environment properties, or exogenous factors, rather than on endogenous factors such as uncertainty, system knowledge, degree of expertise, and strategic thinking. The use of models based on Optimal Control Theory (OCT) and Optimal Estimation Theory (OET) (Kleinman & Curry, 1977; Stein & Wewerinke, 1983) already represented a step forward. These models implied the existence of an internal model, even if a simple one related to the physical system, and of the estimation aspect that underlies detection and orients the acquisition of information.

The current techniques for the design of interfaces aim at formulating display systems which support the operator’s perception and attention and enhance the understanding of physical processes and conceptual reasoning about plant behavior (Lind, 1991; Woods, 1984). The basic theme of these approaches is that the information content of the data depends on the state of the viewer rather than on the properties of the display alone; the two important principles for analyzing interface design are therefore (1) the study of the global properties of the stimulus and (2) a concept-driven, or top-down, analysis starting from the person’s internal model of the physical processes and the plant under control. The use of simulation of cognition in the design of user interfaces is nowadays common practice, based on a large amount of literature, work experience, and methods (Helander, 1988; Life, Narborough-Hall, & Hamilton, 1991; Weir & Alty, 1991).

Design of Procedures

The design of procedures in complex systems has for many years been a useful way to optimize plant production and safety and to minimize operator workload (in order of priority!). Task analysis, which breaks a task down into sequences of “elementary” actions and control decisions, has progressively shifted the focus from the analysis of the actual control procedure selected at the end of the reasoning process toward the aspects of human cognition related to the operator’s decision making (Rasmussen, 1986), typically described as the performance of an internal information processing mechanism. This evolution of the procedure design process closely follows the evolution of the role of the operator, from a prime contributor in direct control of the physical process to a supervisor and decision maker of an automated system governed by computers. In this sense, the previously mentioned models based on optimal control theory and queuing theory, which have been extensively used in the design and analysis of procedures in domains like civil aviation (Baron et al., 1980; Rouse, 1977), have to be updated by models based on Cognitive Task Analysis (CTA), which dedicate greater attention to the sequence of information processes involved in a control decision task.

Broadly speaking, there are two ways to tackle the issue of CTA for the design of procedures for the control of complex plants (Grant & Mayes, 1991; Hollnagel, 1989a; Kieras, 1988; Moray et al., 1992):

•    The first way applies techniques for analyzing tasks which can be encoded in computer programs. In this way criteria other than modeling human cognition are considered, for example, logical structuring of rules, validation, maintenance, and so on. The first attempt at CTA modeling in this way was the GOMS (Goals, Operators, Methods, and Selection rules) type of analysis (Card et al., 1983), which has been followed by models like the ICS (Interacting Cognitive Subsystems) of Barnard (1985). These models try to account, in a “fully orchestrated way,” for the processes of perception, memory, understanding, problem solving, and action during the performance of tasks in complex working environments.

•    The second way represents an extension of traditional task analysis, whereby the constituent processing resources for cognitive activity are studied and a general relationship between the properties of the human cognitive system and the characteristics of overt behavior is defined (Hollnagel & Woods, 1983).
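As a small illustration of the first of these two ways, a GOMS-style description of a task can be encoded directly as a program structure. The goal, operator sequences, and selection rule below are invented for the purpose of the sketch and are not taken from Card et al.:

```python
# Hypothetical sketch of a GOMS-style task description: a goal is achieved by
# one of several methods (sequences of elementary operators), with a selection
# rule choosing among them. All names are invented for illustration.

task = {
    "goal": "acknowledge alarm",
    "methods": {
        "by-mouse": ["locate alarm tile", "move pointer", "click acknowledge"],
        "by-key":   ["locate alarm tile", "press ACK key"],
    },
}

def select_method(task, context):
    # Selection rule: prefer the keyboard shortcut when the hands are
    # already on the keyboard.
    return "by-key" if context.get("hands_on_keyboard") else "by-mouse"

method = select_method(task, {"hands_on_keyboard": True})
print(task["methods"][method])  # the operator sequence predicted for this context
```

Encoding the analysis in this form makes the extra criteria mentioned above (logical structuring of rules, validation, maintenance) directly applicable to the task description itself.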

Prototyping and Paradigms of Performance Predictions

Apart from the specific domains of interfaces and procedures, the work in the design of complex systems has also given rise to a variety of approaches oriented toward the prediction of human performance during the management of the plant; models, paradigms, and frameworks have been developed at different levels of abstraction. In this category, one of the most widely used frameworks for categorizing cognition developed in the 1980s is the Skill-, Rule-, and Knowledge-based (SRK) behavior model of Rasmussen (1986). The SRK framework, tightly coupled to the “step ladder” representation of decision making, is an often used model of human behavior, adopted for simulating operator response in many fields and for many purposes: from design to safety and reliability, training, and decision support development. Other approaches which have attempted to provide architectures describing human cognition in general are: Newell’s SOAR model (1990), Anderson’s ACT model (1983), Brunswik’s lens model (Hammond et al., 1980), and the previously mentioned ICS model (Barnard, 1985).

Most of these simulation architectures are models of micro-cognition, as defined earlier. In order to be used as reasonable representations of human behavior in complex situations they may, for instance, be combined with a particular mathematical formalism and tailored to a specific working environment. In particular, fuzzy set theory (Zadeh, 1965) has been one of the major methods adopted by many authors for the representation of the imprecise and uncertain behavior of human beings. Examples of models using fuzzy set theory are MESSAGE (Boy & Tessier, 1985) and KARL (Knaeuper & Rouse, 1985).
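To illustrate how fuzzy set theory can represent such imprecise behavior, consider a sketch in which an operator's judgment that a temperature is "high" is a matter of degree rather than a crisp threshold. The membership function below is invented for illustration and is not taken from MESSAGE or KARL:

```python
# Sketch of a fuzzy-set representation of an imprecise operator judgment.
# The membership function (thresholds 60 and 90 degrees C) is hypothetical.

def membership_high_temperature(t_celsius):
    """Degree (0..1) to which a temperature counts as 'high' for the operator."""
    if t_celsius <= 60:
        return 0.0
    if t_celsius >= 90:
        return 1.0
    return (t_celsius - 60) / 30.0  # linear transition between 60 and 90

# Instead of a crisp alarm threshold, the simulated operator's concern
# grows gradually with the reading:
for t in (55, 70, 85, 95):
    print(t, round(membership_high_temperature(t), 2))
```

A simulated operator can then grade its responses (monitor more closely, prepare an intervention, act) according to the membership degree, rather than switching behavior abruptly at a single set-point.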

Analysis and Evaluation

Risk and Reliability Analysis

The Probabilistic Safety Assessment (PSA) of plants is gradually becoming common practice in domains other than nuclear energy production, for example in chemical plants and avionics systems. A PSA consists of two main parts: the systems reliability assessment and the human reliability assessment (HRA). The latter, although tackled only in a second phase of the evolution of PSA, has become of primary importance due to the fundamental role of operators in recent accidents and events which have produced catastrophic consequences. In HRA the consideration of the cognitive aspects of human behavior has developed in a similar way as in design. Indeed, the first systematic attempt to evaluate the probability of human erroneous actions, the method THERP (Swain & Guttman, 1980), does not include a consideration of the dynamic cognitive factors that affect operator behavior. The development of a second generation of HRA approaches, following the THERP approach (Dougherty, 1990, 1991), has not succeeded in remedying this fundamental drawback, and even a model like HCR (Hannaman et al., 1984), which is based on the SRK model, fails to provide a proper treatment of human cognition.
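The flavor of a THERP-like calculation can be sketched as follows: a task is decomposed into steps, each assigned a nominal human error probability (HEP), possibly reduced by a recovery factor. The step names and figures below are invented and are not taken from the THERP handbook:

```python
# Sketch of a THERP-style task failure estimate: each step has a nominal
# human error probability (HEP) and a probability that an error is recovered.
# All names and numbers are hypothetical, chosen only for illustration.

steps = [
    {"name": "read indicator", "hep": 0.003, "p_recovery": 0.5},
    {"name": "select control", "hep": 0.001, "p_recovery": 0.0},
    {"name": "execute action", "hep": 0.005, "p_recovery": 0.9},
]

p_success = 1.0
for step in steps:
    p_unrecovered_error = step["hep"] * (1.0 - step["p_recovery"])
    p_success *= (1.0 - p_unrecovered_error)

print(f"P(task failure) = {1.0 - p_success:.2e}")
```

The drawback discussed in the text is visible here: the probabilities are fixed numbers, unaffected by the evolving cognitive state of the operator or the dynamics of the plant.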

The natural evolution of the human reliability approaches has led to attempts to solve the problem of HRA in terms of the “Reliability of Cognition” (Hollnagel, 1991). These solutions account for the dynamic effects of endogenous and exogenous factors on the inappropriate decision making and action (Cacciabue et al., 1993; Roth et al., 1991).

In parallel to the evolution of the HRA methods, the taxonomies dedicated to human erroneous behavior have also gradually shifted their focus from the simple omission/commission alternative to more structured taxonomies of the work environment based on cognitive analysis (Norman, 1981; Rasmussen et al., 1981; Reason, 1990). These new approaches to studying the reliability of cognition need to be coupled to appropriate taxonomies accounting for the socio-technical factors of the working environment (Bagnara et al., 1991) and to well-structured definitions of the causes, or “genotypes,” the manifestations, or “phenotypes,” and the consequences of human erroneous actions (Hollnagel, 1993).

Evaluation of Decisions in Accident Analysis

In the domain of accident analysis and the evaluation of decisions, a number of models exist which aim to simulate human behavior in general terms, that is, to represent the human response during accident conditions in terms of the actions and decisions taken to control the plant.

The main characteristic of these models is that they focus on the simulation of accidents or very dynamic situations and can consequently make some simplifying assumptions which render the cognitive simulation more manageable. Many of the models described in the previous section, such as PROCRU or MESSAGE, or even the simulations of micro-cognition, can describe the operator’s behavior in accident cases. Due to their origin they are, however, less easily customized to the simulation of accident analysis than some recent developments carried out in the domains of nuclear reactor safety and avionics.

In this respect, there exists very little work in the literature from the 1960s and 1970s. The recent developments have thus immediately focused on the simulation of cognition rather than on behavioral aspects of operator response to transients. In particular, the model AIDE (Amalberti & Deblon, 1992) has been developed for the simulation of the meta-cognitive processes governing the decision making of fighter pilots during the planning and execution of military missions. The models CES (Woods et al., 1987) and COSIMO (Cacciabue et al., 1992) have been developed for the analysis of nuclear power plant operators during the management of accidents. Although these two models are based on different cognitive theories, they attempt, as does AIDE, to simulate operator behavior on the basis of the cognitive principles which govern the overall behavior and which are activated during, and in consequence of, the dynamic interaction of the operator with the plant. CES has already been adapted to reliability analysis, and thus represents a step forward toward the inclusion of cognitive factors in reliability analysis. An even more recent approach is the SRG (Hollnagel & Cacciabue, in press), which is an ambitious attempt to provide a general simulation framework for man-machine interaction.

These latter model developments, even if based on some cognitive principles, and therefore related to a theory of cognition, have to be regarded as macro-cognition approaches. The reason for such a classification is that, contrary to the “hard” micro-cognition models, they are based on, and driven by, a strict connection with the working environment in which they are developed. This implies that the contribution of field observations, as well as the demands of realism and practicality in simulation, are considered key factors in the development of human behavior paradigms. These conditions are indeed carefully avoided, as contradictory to the general philosophy of the theory, when models of micro-cognition are developed.

Training

The change in the nature of work not only has an effect on design practices and safety analysis approaches, as described above, but also demands operator training that focuses on problem solving and on the flexible use of multi-purpose Information Technology (IT). The use of IT depends on properly developed cognitive skills more than on perceptual-motor functions. In particular, at the level of responding to familiar situations, the question is one of qualitatively matching repertoires of pre-compiled control procedures with the perceived cues from the environment. For unfamiliar situations, the problem solving and planning abilities of specific persons are to be enhanced by increasing the ability to use qualitative models and to combine relevant actions into appropriate tasks.

The need to consider training as a means to develop operators’ ability to formulate mental strategies was demonstrated several years ago, with a focus, for example, on diagnostic performance (Shepherd et al., 1977). However, cognitive modeling of operator behavior, similar to that adopted for system design and evaluation, was not adopted for training until recently. One of the first attempts to use qualitative models for matching system requirements and human resources in training programs was developed in the early 1980s (Rouse, 1982).

Today the role and use of IT and cognitive science approaches in training practice is quite widespread (Bainbridge & Ruiz-Quintanilla, 1989). The impact of psychological and socio-technical factors on operator performance, and the importance of including the analysis of cognitive processes in training methods, is widely accepted. It is used at all levels of training practice, such as: the description of event sequences in the task, the development of new skills, the generation of new working methods or procedures, the support of learners, and the motivation for continued learning (Bainbridge, 1989). Even the use of simulators for training, which is common practice in many complex technologies like avionics and nuclear power plant control, can be improved and optimized by considering cognitive factors in the planning and by using the knowledge of operators while performing training sessions (Leplat, 1989).

While the introduction of IT in the control loop has enhanced the importance of cognition and thus has affected training practice, it has also had an impact on the training tools themselves. Indeed, the idea of a Computer Assisted Interactive Instruction (CAII) system (Crowder, 1959) has been developing since the late 1950s — although initially within a behavioristic viewpoint. The current approach to computer-based training systems is to develop an Intelligent Tutorial System (ITS) by which the learner is able to learn according to his prior knowledge, his abilities and preferences, and his motivations (Ruiz-Quintanilla, 1989). In a word, the ITS must be supported by a cognitive simulation of the student. The simulation should be as complete as possible, including state of knowledge, typical errors (bug catalogue), level of understanding, frequent misunderstandings, preferred learning method, and so on. This represents a very difficult and complex task which may not be fully achievable. However, the many existing models and paradigms allow the development of systems which, in specific and limited applications, can be rather useful. The technique of ITS represents the most advanced form of utilization of IT equipment for training purposes. This technique has found a much greater application in the domain of support to decision making, which is the last domain of application of simulation of cognition that we will consider here.

On-Line Support

The simulation of cognition plays a role in on-line support in two different ways: firstly, as a way of tailoring the interaction to match the situational demands; secondly, as a basis for the artificial intelligence that is often required of these applications.

Unlike the previous applications, on-line support also has a need for a timely response. In design and analysis, for example, there is no urgency to respond rapidly. The simulation of cognition can, in principle, take its own time since it is not coupled to an external dynamic process. In training there may be a certain need to match the pace of the student in the actual training situation; however, the tempo of that process is usually rather unhurried and pauses or delays can be introduced without disrupting the training beyond repair. In on-line support such luxuries cannot be afforded.

The need for a rapid response has an impact on the simulation of cognition: It needs to be similarly rapid. As a consequence, the search is more for methods and solutions that work fast and efficiently, rather than for solutions that give complete answers. This moves the goal for simulation of cognition even further away from the micro-cognition discussed above. The first priority is to get something that works.

We may summarize the use of simulation of cognition in on-line support as follows:

•    Expert Systems (ES). ESs are used in on-line support for a limited number of tasks, although the variations are many. The tasks typically have to do with procedure generation (in contrast to procedure design), with diagnosis, and with planning. In some cases the simulation of cognition is included explicitly to improve the functionality of the system, for example in providing explanations, in formulating advice, and so on.

•    Intelligent Decision Support Systems (IDSS). The changing role of the operator from an active participant in the loop to a more passive monitor of safety and control systems has produced an emphasis on decision making as the pivotal cognitive function. Decision making was seen as crucial not only for actually making decisions on the spot, but also for diagnosis, problem solving, planning, and scheduling. During the 1980s there was a series of activities, sponsored by the NATO Scientific Affairs Division, which looked at various aspects of human performance in process supervision and control. This led to two conferences that specifically focused on the design and use of intelligent decision support (Hollnagel et al., 1986; Hollnagel et al., 1988).

Earlier studies of decision making and decision support had very much been influenced by the existing normative and descriptive theories of decision making (Lee, 1972), and several computer-based systems had been built. None of these had employed the simulation of cognition as an option. They had tried to provide decision support according to acknowledged decision rules or principles of choice, but had not included adaptation in any notable sense. The developments of Artificial Intelligence in the late 1970s and early 1980s (which to some measure had continued and extended the earlier trend of cognitive simulation from the Carnegie-Mellon school) did, by the mid-1980s, provide a set of methods which could conceivably improve the solutions. Out of this grew the notion of joint cognitive systems (Woods, 1986), which provided a new paradigm for the design of Intelligent Decision Support Systems.

Since the late 1980s a considerable number of IDSSs have been built and installed. This tendency has been most visible in the nuclear field, but a similar development has taken place in a number of other cases, for example the Pilot’s Associate project or various space-related projects. The extent to which simulation of cognition actually takes place varies, but in general the emphasis has been on very well-defined modeling — perhaps as a way of complying with the demand for a rapid response. Some of the best examples are found in Japanese systems (e.g., Monta et al., 1985).

•    Action Monitoring. This differs from the previous categories by having the clear goal of mapping or modeling the operator's intentions. The result of that can then be used in various ways, for example, for error detection, formatting of advice, display control, procedure following, and so on. Action monitoring, or intent recognition, is a growing field that is usually based on well-known techniques such as plan recognition. A few systems have been built for demonstration purposes (Hollnagel, 1989b; Masson & De Keyser, 1992) but none has been used in practice. Action monitoring represents a different side of the simulation of cognition. It is not the main stream of cognition (the primary process) that is in focus, but rather the way in which it is controlled (the secondary process). The interest is not in modeling what the operator does but why he does it. It is therefore, in a certain sense, the simulation of meta-cognition (Valot & Amalberti, 1992).
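The underlying idea of plan recognition can be illustrated by a minimal sketch, in which candidate plans are ordered action sequences and the recognizer scores each plan by how far the observed actions have progressed through it. The plan names and actions below are hypothetical, and real systems use far richer plan libraries and probabilistic matching; this is only intended to make the principle concrete.

```python
# Illustrative sketch of plan recognition for action monitoring.
# Each candidate plan is an ordered action sequence; the recognizer
# scores a plan by the fraction of it matched, in order, by the
# observed action stream, and attributes the best-scoring intent.

def plan_progress(plan, observed):
    """Return the fraction of `plan` matched, in order, by `observed`."""
    i = 0
    for action in observed:
        if i < len(plan) and action == plan[i]:
            i += 1
    return i / len(plan)

def recognize_intent(plans, observed):
    """Return the name of the plan whose prefix best matches the observations."""
    return max(plans, key=lambda name: plan_progress(plans[name], observed))

# Hypothetical process-control plans and an observed action stream.
plans = {
    "start_pump": ["open_valve", "check_pressure", "start_motor"],
    "shutdown":   ["stop_motor", "close_valve", "log_event"],
}
observed = ["open_valve", "check_pressure"]
print(recognize_intent(plans, observed))  # -> start_pump
```

Once an intent has been attributed in this way, the uses listed above follow naturally: an action that fits no active plan can be flagged as a possible error, and advice or displays can be tailored to the recognized plan.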

•    Enhancement of Man Machine Interaction (MMI). The scope of this type of application is to increase the robustness of the MMI by enabling the system to detect and recover from minor errors made during the interaction. It differs from action monitoring in having a very short time span, that is, limited to the duration of the current action rather than extended to the duration of the current plan or strategy. The problem is a very real one: Everyone makes mistakes in using the interactive devices that are part and parcel of computers, process control, communication equipment, navigation systems, and so on. In most cases the mistakes can be recovered because the person detects them almost immediately and/or because the system is sufficiently forgiving. But there are a number of cases where these conditions do not hold; a notable one is Air Traffic Control (ATC). In these cases it is important that the system itself can include the feature of error tolerance.

In this field the simulation of cognition is concentrated on the need to detect inconsistent actions (e.g., typing mistakes, incorrectly used control keys) and to interpret ambiguous commands or queries. The emphasis is on the typical or frequent modes of error rather than on the long-term intentions. The simulation of cognition can therefore benefit from extensive studies of, for example, the performance of temporarily or permanently disabled people, of known distributions of incorrect actions in various categories, and so on.
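The detection and interpretation of mistyped commands can be sketched with a simple similarity-based matcher, here using the edit-distance heuristics of Python's standard `difflib` module. The command vocabulary and the `cutoff` threshold are hypothetical; an operational system would use the frequent error modes mentioned above to tune the matching.

```python
import difflib

# A hypothetical command vocabulary for an operator console.
COMMANDS = ["open", "close", "start", "stop", "status", "reset"]

def interpret(typed, cutoff=0.6):
    """Map a possibly mistyped command to the closest known command,
    or return None when no candidate is similar enough to trust."""
    matches = difflib.get_close_matches(typed, COMMANDS, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(interpret("stattus"))  # -> status
print(interpret("xyz"))      # -> None
```

Returning None rather than guessing reflects the error-tolerance requirement: an ambiguous or unrecognizable input should lead to a query back to the operator, not to a silently executed wrong command.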

DIMENSIONS OF SIMULATION

The simulation of cognition can be characterized on a number of dimensions, which serve as a basis for comparing different approaches to simulation and for choosing the one that is best suited to a given purpose. The main dimensions are the following:

•    Static versus dynamic simulation. This dimension denotes whether the simulation is of a static or dynamic nature. All simulations do, by definition, contain a dynamic aspect in the sense that they reproduce how the person's response develops as the event unfolds. The difference between dynamic and static is therefore whether the simulation would develop or change in the absence of an external event, or whether it will change only in response to an external event.

•    Normative versus descriptive simulation. We have already discussed this in the beginning under the issue of micro-cognition and macro-cognition. It was concluded there that a simulation of cognition should be descriptive rather than normative. It is nevertheless impossible to ensure a clean separation between the two, and it is consequently relevant to try to describe the balance that the simulation achieves between them.

•    Level of granularity of the simulation. A simulation can be either very detailed in the way in which it accounts for cognition, or remain on a relatively high level with few details. This is typically referred to as the level of granularity. The constituent granules or grains correspond to the elementary cognitive functions and/or structures (e.g., knowledge elements). These can obviously be described and simulated on different levels, for instance as a decision making process or as the steps in decision making.

•    Degree of specificity of the simulation. This could also be called the degree of generality. The dimension is used to describe how specific (or conversely, how general) the simulation is. This is of importance when the transfer of the results is considered, that is, whether the results are valid for only that situation or set of events that were simulated, or whether it is possible to draw conclusions for other, related types of situations.
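The first of these dimensions, static versus dynamic, can be made concrete with a minimal sketch. The two operator models below are purely illustrative (the names, the `alertness` state, and the decay rate are assumptions, not part of any model discussed in this chapter): the static model changes state only when an external event arrives, while the dynamic model also evolves autonomously between events.

```python
# Sketch contrasting static and dynamic simulation of an operator model.
# The static model changes only in response to an external event; the
# dynamic model also evolves on its own clock (here, alertness decays
# on every simulation tick, whether or not an event occurs).

class StaticOperator:
    def __init__(self):
        self.alertness = 1.0

    def tick(self, event=None):
        if event == "alarm":          # responds only to external events
            self.alertness = 1.0

class DynamicOperator:
    def __init__(self):
        self.alertness = 1.0

    def tick(self, event=None):
        self.alertness *= 0.9         # evolves even without events
        if event == "alarm":
            self.alertness = 1.0

static, dynamic = StaticOperator(), DynamicOperator()
for _ in range(5):                    # five ticks with no external event
    static.tick()
    dynamic.tick()
print(static.alertness)               # unchanged without events
print(round(dynamic.alertness, 3))    # decayed autonomously
```

In the terms used above, only the second model would "develop or change in the absence of an external event"; the first merely maps events to responses.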

These dimensions can be used to describe different types of simulations, but also to establish requirements for a simulation based on an analysis of user needs.

CONCLUSIONS

In this chapter we have attempted to place into perspective the concept of simulation of cognition, which has been defined as the way to describe the cognitive and behavioral processes occurring during human activities. In particular we have identified two main categories, namely macro- and micro-cognition, in which the models can be classified according to their explicit attempt either to give a detailed theoretical account of how cognition takes place in the human mind (micro-cognition) or to refer to the study of the role of cognition in realistic tasks (macro-cognition).

The application of models of cognition has been analyzed in four domains of application, namely Design, Analysis and Evaluation, Training and On-line support.

For each of these four domains a number of problems and specific needs have been analyzed. The major issue identified in almost all domains has been the adaptability of the model to the real working domain of interest. In this respect it can be observed that the macro-cognition models are more suitable for designers and ergonomists of man-machine systems, even if a considerable effort is still required to include in the models the feedback of experience from field analysis and the ability to simulate complex reasoning processes. Indeed, in general these models are still correlated with a more immediate (and easier to simulate) behavior or reaction to cues from the environment.

On the other hand, the micro-cognition approaches, even if less relevant as far as practical applications are concerned, are also important and necessary in the realm of simulation, because they keep open, and continuously active, the attempt to represent, by a general architecture of cognition, the basic processes and primitives of cognition that underlie all decision making and action performance.

Ideally, the optimal model of cognition should be defined and characterized by means of the constructs of the micro-cognition approaches, but should be able to tackle and solve the practical problems typically addressed by the macro-cognition approaches. In other words, the two categories of models are on a converging path of development, which will require further advances in the philosophical thinking on cognition as well as in the methods and computerized means of simulation.

REFERENCES

Amalberti, R., & Deblon, F. (1992). Cognitive modeling of fighter aircraft process control: A step towards an intelligent onboard assistance system. International Journal of Man-Machine Studies, 36, 639–671.

Anderson, J.R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

Bagnara, S., Di Martino, C., Lisanti, B., Mancini, G. & Rizzo, A. (1991). A human error taxonomy based on cognitive engineering and social occupational psychology. In G.E. Apostolakis (Ed.), Proceedings of the International Conference on Probabilistic Safety Assessment and Management (PSAM) (pp. 513–518). New York, NY: Elsevier.

Bainbridge, L. (1989). Development of skill, reduction of workload. In L. Bainbridge & S.A. Ruiz Quintanilla (Eds.), Developing skill with Information Technology (pp. 87–116). Chichester, UK: Wiley.

Bainbridge, L., & Ruiz Quintanilla, S.A. (Eds.). (1989). Developing skill with Information Technology. Chichester, UK: Wiley.

Barnard, P.J. (1985). Interacting cognitive subsystems: a psycholinguistic approach to short-term memory. In A. Ellis (Ed.), Progress in the psychology of language (pp. 25–53) Hillsdale, NJ: Lawrence Erlbaum Associates.

Baron, S., Muralidharan, R., Lancraft, R., & Zacharias, G. (1980). PROCRU: A model for analysing crew procedures in approach to landing (Tech. Rep. NAS No. 2-10035). Ames, CA: NASA.

Boy, G.A., & Tessier, C. (1985). Cockpit analysis and assessment by the MESSAGE methodology. Proceedings of the second IFAC Conference on Analysis, Design and Evaluation of Man-Machine Systems (pp. 73–79). Oxford: Pergamon Press.

Cacciabue, P.C., Decortis, F., Drozdowicz, B., Masson, M., & Nordvik, J.P. (1992). COSIMO: A cognitive simulation model of human decision making and behavior in accident management of complex plants. IEEE Transactions on Systems, Man, and Cybernetics, 22, 1058–1074.

Cacciabue, P.C., Carpignano, A., & Vivalda, C. (1993). A dynamic reliability technique for error assessment in man-machine systems. International Journal of Man-Machine Studies, 38, 403–428.

Card, S.K., Moran, T.P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

Carbonell, J.R. (1966). A queuing model of many-instrument visual sampling. IEEE Transactions on Human Factors in Electronics, 7, 157–164.

Crowder, N.A. (1959). Automatic tutoring by means of intrinsic programming. In E. Galanter (Ed.), Automatic teaching: The state of the art, (pp. 127–155). New York: Wiley.

Dougherty, E.M. (1990). Human reliability analysis. Where shouldst thou turn? Reliability Engineering & System Safety, 29, 283–299.

Dougherty, E.M. (1991). Issues of human reliability in risk analysis. In G.E. Apostolakis (Ed.), Proceedings of the International Conference on Probabilistic Safety Assessment and Management (PSAM) (pp.699–704). New York, NY: Elsevier.

Grant, S., & Mayes, T. (1991). Cognitive task analysis? In G. R. S. Weir & J. Alty, (Eds.) HCI and complex systems, (pp. 145–164). London, UK: Academic Press.

Hammond, K.R., McClelland, G.H., & Mumpower, J. (1980). Human judgment and decision making. New York: Hemisphere Publishing, Frederick A. Praeger.

Hannaman, G.W., Spurgin, A.J., & Lukic, Y.D. (1984). Human cognitive reliability model for PRA analysis. (NUS-4531). San Diego, CA: NUS Corporation.

Helander, M., (Ed.). (1988). Handbook of human computer interaction. Amsterdam, The Netherlands: Elsevier Science Publishers.

Hollnagel, E. (1989a, November). Performance improvement through cognitive task analysis. Paper presented at the Workshop on Task-Oriented Approach to Human Factors Engineering, Noordwijk, The Netherlands.

Hollnagel, E. (1989b). Action monitoring and plan recognition: The response evaluation system (RESQ). (Tech. Rep. No. P857-WP6-Axion-099). Birkeroed, Denmark: Computer Resources International.

Hollnagel, E. (1991). Cognitive ergonomics and the reliability of cognition. Le Travail Humain, 54, 305–321.

Hollnagel, E. (1993). Human Reliability Analysis: Context and Control. London: Academic Press.

Hollnagel, E., & Cacciabue, P.C. (1991, September). Cognitive modelling in system simulation. Paper presented at the Third European Conference on Cognitive Science Approaches to Process Control, Cardiff, UK.

Hollnagel, E., & Cacciabue, P.C. (in press). Modelling cognition and erroneous actions in system simulation contexts. International Journal of Human-Computer Studies.

Hollnagel, E., Mancini, G., & Woods, D.D. (Eds.). (1986). Intelligent decision support in process environments. NATO ASI Series. Berlin: Springer-Verlag.

Hollnagel, E., Mancini, G., & Woods, D.D. (Eds.). (1988). Cognitive engineering in complex dynamic worlds. London: Academic Press.

Hollnagel, E., & Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583–600.

Kieras, D.E. (1988). Towards a practical GOMS model methodology for user interface design. In M. Helander (Ed.), Handbook of human computer interaction, (pp. 322–367). Amsterdam, The Netherlands: Elsevier Science Publishers.

Kleinman, D.L., & Curry, R.E. (1977). Some new control theoretic models for human operator display modelling. IEEE Transactions on Systems, Man, and Cybernetics, 7, 1074–1103.

Knaeuper, A., & Rouse, W.B. (1985). A model of human problem solving in dynamic environments. IEEE Transactions on Systems, Man, and Cybernetics, 15, 708–719.

Lee, S.M. (1972). Goal programming for decision analysis. Berlin: Auerbach.

Leplat, J. (1989). Relations between task and activity in training. In L. Bainbridge & S.A. Ruiz Quintanilla (Eds.), Developing skill with Information Technology (pp. 125–130). Chichester, UK: Wiley.

Life, A., Narborough-Hall, C., & Hamilton I. (Eds.), (1991). Simulation and the user interface. London: Taylor & Francis.

Lind, M. (1991). Representation and abstractions for interface design using multilevel flow modelling. In G.R.S. Weir & J. Alty (Eds.), HCI and complex systems (pp. 221–239). London: Academic Press.

Mancini, G. (1986). Modelling humans and machines. In E. Hollnagel, G. Mancini & D. D. Woods (Eds.), Intelligent decision support in process environments (pp. 307–323). NATO ASI Series. Berlin: Springer-Verlag.

Masson, M., & De Keyser, V. (1992, June). Human error: Lesson learned from a field study for the specification of an intelligent error prevention system. Paper presented at the Annual International Ergonomics and Safety Conference. Denver, CO.

Monta, K., Fukutomi, S., Itoh, M., & Tai I. (1985, September). Development of a computerized operator support system for boiling water reactor power plants. Paper presented at the International topical meeting on computer applications for nuclear power plant operation and control. Pasco, WA.

Moray, N., Sanderson, P.M., & Vicente, K.J. (1992). Cognitive task analysis of complex work domain: a case study. Reliability Engineering and System Safety, 36, 207–216.

Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.

Newell, A., & Simon, H.A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Norman, D.A. (1981). Categorization of action slips. Psychological Review, 88, 1–15.

Pew, R.W., Baron, S., Feehrer, C.E., & Miller, D.C. (1977). Critical review and analysis of performance models applicable to man-machine systems evaluation (Tech. Rep. No. 3446). Cambridge, MA: Bolt, Beranek & Newman.

Rasmussen, J. (1986). Information processes and human-machine interaction: An approach to cognitive engineering. Amsterdam: North Holland.

Rasmussen, J., Pedersen, O.M., Mancini, G., Carnino, A., Griffon, M., & Gangolet, P. (1981). Classification system for reporting events involving human malfunctions (Tech. Rep. No. RISOE - M - 2240, EUR-7444 EN). Luxembourg: Commission of the European Communities.

Reason, J. (1990). Human error. Cambridge, UK: Cambridge University Press.

Roth, E.M., Pople, H.E., Jr., & Woods, D.D. (1991). Cognitive environment simulation: A tool for modelling operator cognitive performance during emergencies. In G.E. Apostolakis (Ed.), Proceedings of the International Conference on Probabilistic Safety Assessment and Management (PSAM) (pp. 959–964). New York, NY: Elsevier.

Rouse, W.B. (1977). Human-computer interaction in multi-task situations. IEEE Transactions on Systems, Man, and Cybernetics, 7, 384–392.

Rouse, W. B. (1980). Systems engineering models of human-machine Interaction. Amsterdam: North Holland.

Rouse, W. B. (1982). A mixed-fidelity approach to technical training. Journal of Educational Technology Systems, 11, 346–385.

Ruiz Quintanilla, S.A. (1989). Intelligent tutorial systems (ITS) in training. In L. Bainbridge & S.A. Ruiz Quintanilla (Eds.), Developing skill with information technology (pp. 329–338). Chichester, UK: Wiley.

Senders, J.W. (1964). The human operator as a monitor and controller of multidegree of freedom systems. IEEE Transactions on Human Factors in Electronics, 7, 103–106.

Shepherd, A., Marshall, E. C., Turner, A., & Duncan, K.D (1977). Control panel diagnosis: A comparison of three training methods. Ergonomics, 20, 347–361.

Sheridan, T.B. (1986). Forty-five years of man-machine systems: History and trends. Keynote Address. Proceedings of the Second IFAC Conference on Analysis, Design, and Evaluation of Man-Machine Systems (pp. 6–14). Oxford, UK: Pergamon Press.

Stassen, H.G., Johannsen G., & Moray, N. (1990). Internal representation, internal model, human performance model and mental workload. Automatica, 26, 811–820.

Stein, W., & Wewerinke, P. (1983). Human display monitoring and failure detection: control theoretic models and experiments. Automatica, 19, 189–211.

Swain, A. D., & Guttman, H.E. (1983). Handbook on human reliability analysis with emphasis on nuclear power plant application (Tech. Rep. No. NUREG/CR-1278. SAND 80-0200 RX, AN). Albuquerque, NM: Sandia National Laboratories.

Valot, C., & Amalberti, R. (1992). Metaknowledge for time and reliability. Reliability Engineering and System Safety, 36, 199–206.

Weir, G. R. S., & Alty, J. (Eds.). (1991). HCI and complex systems, London, UK: Academic Press.

Woods, D.D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21, 229–244.

Woods, D.D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini & D. D. Woods (Eds.), Intelligent decision support in process environments, (pp. 230–254) NATO ASI Series. Berlin: Springer-Verlag.

Woods, D.D., Roth, E.M., & Pople, H., Jr. (1987). Cognitive environment simulation: An artificial intelligence system for human performance assessment. Volumes 1–2 (Tech. Rep. No. NUREG/CR-4862). Washington, DC: US-NRC.

Zadeh, L.A. (1965). Fuzzy sets. Information & Control, 8, 338–353.
