5

Symbolic AI Computer Simulations as Tools for Investigating the Dynamics of Joint Cognitive Systems

David D. WOODS

The Ohio State University

Emilie M. ROTH

Westinghouse Science and Technology Center

Based on our experience in developing a symbolic AI computer simulation, we explore how such simulations can be used as a tool in the study of joint human-machine cognitive systems in field settings. We contend that cognitive simulation is best seen as a tool to aid a cycle of empirically based model development and model-based empirical investigations of joint cognitive systems. The chapter is organized around a series of principles or claims about simulations of cognition as related to complex human-machine systems.

The thesis of this chapter is that cognitive simulation is best seen as one of a set of complementary tools for the study of joint cognitive systems in field settings. Cognitive simulation is a technique invented by Newell and Simon (Newell & Simon, 1963; Newell & Simon, 1972; Simon, 1969) in which information processing concepts about human cognitive activities are expressed as a runnable computer program, usually through symbolic processing techniques (cf. Corker et al., 1986; Johnson, Moen, & Thompson, 1988; Kieras & Polson, 1985; Woods et al., 1987 for examples using symbolic processing techniques; cf. also Axelrod, 1984; Kirlik et al., 1989; Payne, Johnson, Bettman & Coupey, 1990 for examples using conventional programming techniques). The cognitive simulation can be stimulated by inputs from a domain scenario to generate computer system behavior that can be compared to observed human behavior for the same scenario. These computer simulations are more accurately referred to as simulations of cognition (cf. Cacciabue & Hollnagel, chapter 4, this volume). But since Newell and Simon originally used the label ‘cognitive simulation’, we will sometimes retain their terminology.

The advantages of simulations of cognition revolve around the fact that building a runnable computer program (cf. Simon, 1969) forces the modeler to describe mechanisms in great detail. In particular, expressing a model as a runnable program forces the developer to make explicit the knowledge and processing that are implicit in the practitioner’s field of activity. Running the simulation through a scenario produces specific behavior that can be analyzed and compared to other data. As a result, it is possible to uncover a variety of consequences of the basic information processing mechanisms that are instantiated in the program. Furthermore, the resulting simulation can be run on a variety of scenarios, including scenarios that were not part of the original design set. Thus, the implications of assumptions or concepts about human cognitive activities captured by the simulation can be explored in a wide range of domain-specific circumstances. In this way, cognitive simulation can be seen as a method for more directly linking theory building and empirical investigations of human problem-solving activities in semantically rich domains.

One connotation often associated with cognitive simulation is that the information processing mechanisms bestowed on the computer are intended to be a model or theory about the information processing mechanisms of the human mind, for example, the physical symbol system hypothesis (Newell, 1980; Newell, 1990). In this sense of “cognitive simulation”, the computer program is seen as a formal theory about human cognitive processes. However, this is not the only sense in which computer simulations have been or can be used in cognitive science or in various scientific and engineering specialties (cf. Pylyshyn, 1991).

Several years ago, we faced the problem of developing an analytic strategy that would be useful to identify places/circumstances where erroneous situation assessments could develop and propagate during dynamic fault management as in nuclear power plant emergency operations (Woods & Roth, 1986). We discarded the notion that symbolic processing mechanisms must be a direct model of human cognition. Instead, we focused on other properties of a computer simulation approach in a project that later came to be called the Cognitive Environment Simulation (CES).

In this paper we will use our experience in that project and experience from related efforts to develop cognitive simulations to describe how AI-based computer simulations can be used as a tool in the study of joint cognitive systems in field settings. We contend that cognitive simulation is best seen as a tool in a cycle of empirically based model development and model-based empirical investigations of joint cognitive systems. As such, the computer program in itself has no special status as a specification of the cognitive process employed by people to accomplish the task at some level of description.1 We will attempt to explore the role of simulations of cognition by posing a series of principles or claims about cognitive simulation as related to complex human-machine systems.

THE ROLE OF COGNITIVE SIMULATION

Cognitive simulations are tools for investigation of cognition.

In the CES project, in collaboration with H. Pople, we tried to use symbolic processing computer programming techniques, as other computer programs have been used (e.g., Axelrod, 1984), as tools to explore the ramifications of some concepts about the cognitive activities that underlie human performance in different kinds of circumstances. The AI was not to be seen as the model of human cognition, but rather as a language for expressing concepts about some of the cognitive factors at work in dynamic fault management. This is the sense in which we used the term cognitive simulation.

In other words, strip away from symbolic AI its claims to be a model of mind and what you have left is a set of powerful techniques using symbolic programming for building computer programs that perform cognitive work. Examples of symbolic processing mechanisms used in Pople’s artificial intelligence performance system EAGOL2 that were found to be useful in the CES project include (a) distributed software agents with local information processing tasks and local knowledge resources, which are activated by particular triggering events and share working results through message passing (in effect, working in parallel on pieces of the problem); (b) ways to represent knowledge about physical processes, fault categories, disturbance propagation, operational goals, and corrective responses; (c) qualitative reasoning techniques about how measured data on the state of the monitored process change given different patterns of control and fault-related influences acting on the process.
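The EAGOL mechanisms themselves are proprietary, but the general flavor of item (a) can be suggested with a minimal, hypothetical sketch in Python (the names and data below are illustrative inventions and are not drawn from EAGOL or CES): agents with local knowledge resources are activated by triggering events and share working results through a common message space.

class Agent:
    """A software agent with a local task, local knowledge, and triggering events."""

    def __init__(self, name, triggers, handler):
        self.name = name
        self.triggers = triggers    # event types that activate this agent
        self.handler = handler      # local processing applied to (event, board)

    def maybe_run(self, event, board):
        if event["type"] in self.triggers:
            for result in self.handler(event, board):
                board.append(result)    # share working results via message passing


def feedwater_tracker(event, board):
    # Local knowledge resource (hypothetical): relate a falling steam generator
    # level to a possible loss-of-feedwater influence.
    if event.get("signal") == "SG_level" and event.get("trend") == "falling":
        yield {"type": "hypothesis", "content": "loss of main feedwater",
               "basis": event}


def expectation_tracker(event, board):
    # A second agent working in parallel on another piece of the problem:
    # flag changes that no posted hypothesis accounts for.
    if not any(m["type"] == "hypothesis" for m in board):
        yield {"type": "unexpected_finding", "basis": event}


agents = [
    Agent("feedwater-tracker", {"process_change"}, feedwater_tracker),
    Agent("expectation-tracker", {"process_change"}, expectation_tracker),
]

board = []   # shared message space read and written by all agents
event = {"type": "process_change", "signal": "SG_level", "trend": "falling"}
for agent in agents:
    agent.maybe_run(event, board)
print(board)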

One can use these various symbolic processing mechanisms to build computer systems that perform cognitive work. When stimulated with input from a scenario (a temporal stream of the data about the state of the monitored process that is or could be available during an unfolding incident), the computer simulation can be built to carry out cognitive functions such as monitoring for changes in process state or diagnosis of underlying faults. For example, CES performs some of the cognitive functions involved in dynamic fault management (Woods, in press-b): it monitors and tracks changes in process state, identifies abnormal and unexpected process behaviors, builds and revises a situation assessment (what influences are currently acting on the monitored process), formulates hypotheses to account for unexplained process behavior, and formulates intentions to act based on its situation assessment.

While a cognitive simulation provides a compelling demonstration of the cognitive work entailed by the environment, the specific software mechanisms employed in the simulation do not constitute a theory of human cognition in that environment. For us, cognitive simulations specify theories, but are not theories. The simulation is a representation or realization of a set of concepts; a way to formalize the concepts so that one can explore and investigate the explanatory power of the concepts in a wide range of circumstances. Thus, cognitive simulations embody theories of human cognition at the level of cognitive competencies, not in the details of implementation. While the computer simulation can bring out in bold relief the cognitive functions required to operate successfully in the environment, the specific symbolic processing mechanisms employed to achieve those cognitive functions in the computer program do not in any sense constitute a theory of how people perform the same cognitive functions. The theory embodied in the computer simulation is intended to apply at the level of cognitive competencies (similar to Newell’s 1982 concept of the knowledge level) rather than at the symbol manipulation level (what Newell has referred to as the program level). The role of the cognitive simulation is to help get greater leverage from a set of concepts -- what do these concepts explain about the cognitive system in question (Elkind et al., 1990). Hence, the CES project began with and continues to evolve with reference to a model of the cognitive activities and demands of dynamic fault management tasks based on empirical studies and explanatory concepts (Roth, Woods & Pople, 1992; Woods, 1992; Woods, in press-b; Woods & Roth, 1986).

As Heil (1981) has pointed out in reference to computer programs as models of cognition, “… we must take care to avoid the error of supposing that descriptions of things done are really indirect descriptions of the mechanisms which get them done” (italics in original). It is the investigator’s intelligence in setting up a correspondence between the cognitive functions performed by the program and the hypothesized cognitive demands and activities in the field of practice that determines the usefulness of the simulation as a tool. Symbolic processing techniques are resources that can be used to build programs that carry out the cognitive functions which the investigator thinks are important in this setting or scenario. The modeling concepts exist separate from their instantiations within the simulation tool. Wielding the simulation in relationship to other sources of data provides the potential for learning about the dynamics of human performance in complex environments. This leads us to one criterion that determines if a computer program is to count as a cognitive simulation -- one must explicitly specify the concepts about cognitive activities and demands in the target situations that govern the development and evolution of the simulation.

When one sees cognitive simulations as tools for investigation, the implementation details of the simulation as a computer program are of secondary interest. What is important is the set of competencies of the program in relation to a model of the cognitive activities and demands of the tasks in question. For example, CES builds and revises a situation assessment by keeping track of what influences it believes are currently acting on the monitored process. This includes both influences produced by control activities and those produced by hypothesized faults. The set of influences is used to compute expected and unexpected process behaviors. There is no claim that how the computer program fulfills this competency corresponds directly to mechanisms of human cognition. The modeling claim is that forming expectancies based on consideration of multiple influences is a cognitive function that goes on in dynamic fault management and that this cognitive function can account for the behavior observed in empirical studies. This example raises another criterion that must be fulfilled if a computer program is to count as a cognitive simulation -- one should specify the competencies of the program and explicitly map them onto concepts about the cognitive activities and demands of the field of activity in question (Roth & Woods, 1988).
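As an illustration of this competency only (the structures and parameter names below are hypothetical and do not reproduce the CES software), forming expectancies from an influence set and flagging unexpected process behaviors can be sketched as follows:

# Hypothetical sketch of influence-based expectancies; not the CES implementation.
# The situation assessment is a set of influences believed to be acting on the
# monitored process (from control actions and from hypothesized faults). Each
# influence carries expected directions of change for particular parameters.

influence_set = [
    {"name": "auxiliary feedwater on", "expects": {"SG_level": "rising"}},
    {"name": "small primary leak",     "expects": {"pressurizer_level": "falling"}},
]

observed_changes = {
    "SG_level": "rising",              # accounted for by auxiliary feedwater
    "pressurizer_level": "falling",    # accounted for by the hypothesized leak
    "containment_pressure": "rising",  # accounted for by no known influence
}

def classify(changes, influences):
    expected, unexpected = [], []
    for parameter, direction in changes.items():
        if any(inf["expects"].get(parameter) == direction for inf in influences):
            expected.append(parameter)
        else:
            unexpected.append(parameter)   # candidate trigger for diagnostic search
    return expected, unexpected

expected, unexpected = classify(observed_changes, influence_set)
print("expected behaviors:  ", expected)
print("unexpected behaviors:", unexpected)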

Computer simulations of complex human-machine systems address distributed cognitive systems.

In actual fields of practice, the focus is rarely on the activities of a single cognitive agent. Cognitive work goes on in the context of multiple people, machines that perform aspects of cognitive work, and a variety of tools and external representations of systems, devices, or processes. In the CES project, we tried to shift the focus from modeling and simulating human cognition to modeling and simulating cognitive systems that are distributed across multiple people, machines (e.g., AI advisors), and external representations or cognitive tools that are shaped by the cognitive demands of the specific task domain (Hutchins, 1991; Woods & Roth, 1988). There are three parts to a cognitive systems view of a human-technical system: (1) the set of agents who perform cognitive work (this includes multiple people and algorithmic and heuristic machine information processors in supervisory control applications, e.g., Hutchins, 1990; Roth & Woods, 1988); (2) the external representation of the monitored process and the cognitive tools available, which shape the cognitive strategies of the practitioners in the system (e.g., Hutchins, 1991; Woods, in press-a); (3) the demands of the task domain in terms of the challenges or constraints they pose for any cognitive agent or set of agents functioning in that setting (Woods, 1988).

To be relevant, cognitive simulations must address how cognitive activities are distributed across multiple people and machines, how cognitive activities are shaped by characteristics of the available external representations and cognitive tools, and how cognitive activities are locally rational responses to the cognitive demands and constraints (e.g., competing goals) of the specific task domain (Hutchins, 1991; Woods, Johannesen, Cook, & Sarter, in press; Woods & Roth, 1988). Note that these are not three independent aspects. Modeling a joint cognitive system is about understanding how the interactions among these factors -- demands, external representation and tools as resources, individual strategies and the distribution of activities across agents -- mutually shape each other (cf. Hutchins, 1991; Roth & Woods, 1988 for two examples of studies that investigate the mutual shaping across these factors).

Computer simulations of complex human-machine systems can be used to explore how the demands of the specific task domain constrain or shape the behavior of cognitive systems.

Research on human error today often assumes that erroneous actions and assessments result from rational but limited or bounded cognitive processes (Woods, Johannesen, Cook, & Sarter, in press). People behave consistently with Newell’s principle of rationality -- that is, they use knowledge to pursue their goals (Newell, 1982). But there are bounds to the data that they pay attention to, to the knowledge that they possess, to the knowledge that they activate in a particular context, and there may be multiple goals which conflict (e.g., bounded or limited rationality; Reason, 1987; Simon, 1969). Thus, one approach to modeling a cognitive system in a particular task context is to trace the problem-solving process to identify points where limited knowledge and processing resources can lead to breakdowns given the demands of the problem (Woods, 1990). A cognitive simulation can be an excellent tool for exploring different concepts about limits on cognitive processing (e.g., attentional bottlenecks; limited knowledge activation) in relation to the demands imposed by different kinds of problems that can occur in the field of practice. The cognitive simulation can then be constructed to allow the investigator to vary the knowledge resources and processing characteristics of a limited-resource computer problem-solver and observe the behavior of the computer problem-solver in different simulated domain scenarios. This strategy depends on mapping the cognitive demands imposed by the domain in question that any intelligent but limited-resource problem-solving agent or set of agents would have to deal with. The demands include the nature of domain incidents, how they are manifested through observable data to the operational staff, and how they evolve over time. Then, one can embody this model of the problem-solving environment as a limited-resource symbolic processing problem-solving system.

In effect, with this technique one is measuring the difficulty or complexity posed by a domain incident, given some set of resources, by running the incident through the cognitive simulation (Kieras & Polson, 1985; Woods et al., 1990). In other words, the cognitive simulation supports a translation from the language of the individual field of practice to the language of cognitive activities -- what data need to be gathered and integrated, what knowledge is required, and how that knowledge is activated and brought to bear in the cognitive activities involved in solving dynamic problems. The cognitive simulation thus yields a description of the information flow and knowledge activation required to handle domain incidents. One can investigate how changes in the incident (e.g., obscuring evidence, introducing another failure) affect the difficulty of the problem for a given set of knowledge resources. Conversely, one can investigate how changes in the knowledge resources (e.g., improved mental models of device function) or information available (e.g., integrated information displays) can affect performance.
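A hypothetical harness for such an investigation might run incident variants through a stand-in for a limited-resource problem-solver under different knowledge configurations and record rough indices of the demands each variant imposes. The sketch below is illustrative only; the functions, incidents, and knowledge sets are invented and do not reflect the CES interface.

# Illustrative sketch only; not the CES interface. Run incident variants through
# a stand-in for a limited-resource problem-solver and record simple indices of
# the information flow and knowledge activation each variant demands.

def run_incident(incident, knowledge):
    # Stand-in for the cognitive simulation: count findings the knowledge set
    # cannot account for and the number of data channels that must be monitored.
    unexplained = [e for e in incident["events"] if e not in knowledge["explains"]]
    channels = {e.split(":")[0] for e in incident["events"]}
    return {"incident": incident["name"],
            "knowledge": knowledge["name"],
            "unexplained_findings": len(unexplained),
            "data_channels": len(channels)}

incidents = [
    {"name": "baseline fault",
     "events": ["SG_level:falling", "feed_flow:low"]},
    {"name": "variant with masking second failure",
     "events": ["SG_level:steady", "feed_flow:low", "steam_flow:rising"]},
]

knowledge_sets = [
    {"name": "nominal knowledge",
     "explains": {"SG_level:falling", "feed_flow:low"}},
    {"name": "improved mental model of device function",
     "explains": {"SG_level:falling", "feed_flow:low", "steam_flow:rising"}},
]

for incident in incidents:
    for knowledge in knowledge_sets:
        print(run_incident(incident, knowledge))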

While this approach has clear limitations, it does allow an overall assessment of the range and complexity of cognitive activities demanded by the situation. In the CES project we were able to deal with only a subset of the factors that make up a joint or distributed cognitive system. Workload management and the role of cognitive tools were two of the many requirements that we originally set but that are not captured within the current version of the computer simulation. Nevertheless, in several instances we were able to elucidate the information flow and knowledge activation required to handle domain incidents, with specific implications for the field in question. For example, we analyzed variations within one fault category to determine which are likely to be difficult diagnostically (Woods, Pople & Roth, 1990). We explored different types of fault management situations where problem solution goes beyond rote following of procedures (Roth, Woods, & Pople, 1992). In using the cognitive simulation on a variety of incidents we learned about the role of practitioner expectations and violations of those expectations in guiding and simplifying diagnostic search (Woods, in press-b).

In their current stage of development, models of cognitive systems in natural settings are probably much more about developing a cognitive language for describing the task environment than about modeling specific internal psychological mechanisms. As Hogarth (1986) has commented, “Good models of decision behavior gain much of their explanatory power by elucidating the structure of the task being investigated” (p. 445). Even in traditional cognitive psychology and cognitive science there is a growing appreciation that the demands of the environment play a significant role in defining human performance and that significant insights can be gained by exploring the nature of the problem being solved using minimal information processing assumptions (cf. Anderson, 1990; Marr, 1982). “An algorithm is likely to be understood more readily by understanding the nature of the problem being solved than by examining the mechanism (and the hardware) in which it is solved” (Marr, 1982, p. 27). Given the range of cognitive activities that come into play in complex fields of practice, a model of task properties may be the critical bottleneck to progress (e.g., Hammond, 1988). Of course, models of the cognitive demands of fields of practice cannot be pursued independent from understanding the psychological processes that occur in those tasks; the two are mutually constrained (Simon, 1991; Woods, 1988). Cognitive simulation can be a powerful tool, in part because it links cognitive demands and cognitive activities together so that the dynamics of their interaction and interdependence can be explored.

Cognitive simulations are needed to explore the temporal dynamics of cognitive systems in relation to the temporal characteristics of incidents.

In dynamic environments, data comes in over time, changes, and occurs in the presence of other events. Faults propagate chains of disturbances that evolve and spread through the system (Woods, in press-b). Counteracting influences are injected by automated systems and by practitioners to preserve system integrity, to generate diagnostic information, and to correct faults. Information is based on change, events (behavior over time), and the response to interventions. Static models are incapable of expressing the complexity of cognitive functioning in dynamic environments -- the interaction of data-driven and knowledge-driven reasoning, the role of interrupts in the control of attentional focus, the scheduling of cognitive activities as workload bottlenecks emerge, the interaction of intervention and feedback on process response.

One can appreciate the complexities of the situation faced by practitioners only through developing runnable computer systems that must deal with the dynamics of problems. In the CES project it became clear that to follow and control dynamic events it was necessary to use a computer program with elaborate mechanisms adapted to problems that evolve and change over time. For example, CES contains mechanisms (a) for tracking over time interactions among multiple influences acting on the monitored process (e.g., qualitative reasoning); (b) for tracking when automation should activate or inactivate various control systems and how goal priorities change through an incident; (c) for projecting the impact of a state change on future process behavior to create temporal expectations such as reminders to check whether the expected behavior is observed, or, more importantly, not observed. Interestingly, in one study that used CES as a tool (Roth et al., 1992), the factors that made the class of incidents difficult could be found only through an analysis of the dynamics of the incident in relation to the dynamics of the joint cognitive system.
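A minimal sketch of the third kind of mechanism -- projecting the impact of an event on future process behavior and posting a reminder so that the absence of the expected behavior is itself noticed -- might look like the following. The names, timings, and thresholds are hypothetical and are not taken from CES.

# Hypothetical sketch of temporal expectations; not the CES implementation.
# An event projects an expected future behavior; a reminder is posted so that
# the expectation is checked at the appropriate time, and its absence is
# reported as an unexpected finding.

pending = []   # reminders: (due_time, parameter, expected_direction)

def project(event, now):
    # Assumed example: starting auxiliary feedwater should raise steam
    # generator level within roughly 120 seconds.
    if event == "auxiliary feedwater started":
        pending.append((now + 120, "SG_level", "rising"))

def check_reminders(now, observations):
    for reminder in list(pending):
        due, parameter, direction = reminder
        if now >= due:
            pending.remove(reminder)
            if observations.get(parameter) == direction:
                print(f"t={now}: expected {parameter} {direction} -- observed")
            else:
                print(f"t={now}: expected {parameter} {direction} -- NOT observed; "
                      "treat as an unexpected finding and trigger diagnostic search")

project("auxiliary feedwater started", now=0)
check_reminders(now=60, observations={"SG_level": "steady"})    # reminder not yet due
check_reminders(now=150, observations={"SG_level": "steady"})   # expectation violated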

Cognitive simulation provides the potential to explore the dynamic interplay of problem evolution and cognitive processing. This may be critical for making progress on the problem of how control of attention works (Woods, 1992). Many demanding tasks such as dynamic fault management are practiced in a cognitively noisy world, where very many stimuli are present that could be relevant to the problem-solver (Woods, in press-b). There are a large number of data channels, and the signals on these channels are usually changing (i.e., the raw values are rarely constant even when the system is stable and normal). Given the nature of human attentional processes and the large amounts of raw data, human monitors focus selectively on a portion of the field of data. These types of task worlds demand facility at rapidly reorienting attention to new, potentially relevant stimuli. Given the large field of changing data, one challenge in building a cognitive simulation is getting the program to ignore “uninteresting” changes and to focus only on “interesting” ones. The problem is that what is interesting depends on a set of factors that include domain-specific knowledge, the state of the problem-solving process, the relevant goals, and tradeoffs about how to respond to trouble under conditions of irreducible uncertainty, time pressure, and the possibility of very negative outcomes.

For example, in the CES project, we found that diagnostic engines developed for static situations, when applied to dynamic processes, bog down in pursuing too many irrelevant data variations. What is needed is some front-end capability to recognize which of a set of changes (or which absent changes) should initiate diagnostic search. In CES this is accomplished by forming a situation assessment that consists of the ‘known’ influences acting on the monitored process. This influence set is used to evaluate process changes to determine whether each change is expected or unexpected given those influences. Unexpected findings act as a trigger for diagnostic search mechanisms whose charter is to determine an explanation for the unexpected finding. This kind of process greatly reduces the amount of diagnostic work required to track changes in the monitored process. Thus, an important area for future work is how to build cognitive simulations that can exercise control of attention in principled ways.

Investigations using cognitive simulations must be part of a larger cycle of empirically based model development and model-based empirical investigations of joint cognitive systems.

CES was part of a process of trying to expand and deepen our understanding of the cognitive activities and demands of fault management tasks, especially what makes fault management difficult and vulnerable to breakdown. Several activities contributed to this learning process. We learned from the struggles to get a research AI software system to behave reasonably in a supervisory role when stimulated by data about a developing incident in a specific domain (nuclear power emergency operations). We learned from the differences between the behavior of the simulation and empirical data available from previous studies of nuclear power emergency operations (Woods, Pople, & Roth, 1990). We learned from new data collected on human performance in this setting, motivated in part to better understand how the simulation should function (Roth, Woods, & Pople, 1992).

The value of a cognitive simulation is in wielding it to learn about the dynamics within a particular cognitive system or the dynamics of cognitive systems in general. In wielding a cognitive simulation one designs and carries out a kind of experimental investigation. One collects data about the behavior of the simulation across a range of problems/domain scenarios that represent a sample of the space of problems or scenarios that could occur (Simon, 1969). Setting up this type of investigation requires thoughtful consideration of which set of scenarios to use, what empirical data can serve as contrasting cases, and what comparisons to make that will lead to new learning (again, see Axelrod, 1984, for a classic example of the use of a computer simulation as a tool for investigation in association with empirical data and the evolution of modeling concepts). One designs a cognitive simulation study. When does a computer program function as a cognitive simulation? When you can describe the study that you did with it, and when you can specify what you learned from that effort.

Wielding a cognitive simulation is the equivalent of an empirical study. What is of primary interest is the methodology, results, and implications of an investigation using cognitive simulation as a part of the study. This implies that there is a need to observe the behavior of the computer program across a range of problems. As a result, development of a map of the problem space is needed so as to know what kinds of problems should be posed to the cognitive simulation. As in any study, there are uncertainties and limits; the skill in study design lies in balancing these uncertainties and limits so as to learn, however tentatively, about joint cognitive systems.

It is important to emphasize that wielding a cognitive simulation as part of an empirical study does not depend on achieving a correspondence between computer and human protocols. Information lies in the differences between practitioner behavior and the behavior of the cognitive simulation relative to the concepts instantiated (and those not instantiated) in the simulation.
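In schematic terms, such a comparison amounts to running the simulation and collecting practitioner data over the same designed sample of scenarios and then examining where the two protocols diverge. The sketch below is purely illustrative; the scenarios and protocols are invented stand-ins, not data from CES studies.

# Purely illustrative sketch of the "difference" idea; the protocols below are
# invented stand-ins, not empirical data or CES output.

scenarios = ["loss of main feedwater", "tube rupture with masked cues"]

simulation_protocols = {
    "loss of main feedwater": {"detects low feed flow", "isolates faulted line"},
    "tube rupture with masked cues": {"detects falling pressurizer level"},
}

practitioner_protocols = {
    "loss of main feedwater": {"detects low feed flow", "isolates faulted line"},
    "tube rupture with masked cues": {"detects falling pressurizer level",
                                      "notices secondary radiation alarm"},
}

for scenario in scenarios:
    sim = simulation_protocols[scenario]
    crew = practitioner_protocols[scenario]
    print(scenario)
    # Differences point to concepts missing from (or extra in) the simulation.
    print("  in practitioner protocol only:", crew - sim)
    print("  in simulation protocol only:  ", sim - crew)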

A cognitive simulation is always a partial incomplete realization of the modeling concepts.

A cognitive simulation captures some aspects of the cognitive factors at work; it addresses only some part of the task environment.

The scope of cognitive activities that occur in complex fields of practice is daunting. These diverse cognitive activities are not modular either, whether considered from the point of view of developing AI software or from a psychological point of view. Often the target domain itself is very complex -- nuclear power plants and emergency operations; flightdecks in commercial aviation; anesthetic management during surgery. This means that the scope of knowledge and strategy development is very great. The breadth of human cognitive activities that come into play in just a single scenario or subtask can be extremely wide. Even a relatively simple subtask may involve several components that are normally studied or modeled in isolation in the laboratory (Woods & Roth, 1988).

Furthermore, our state of understanding of cognition is not static. The evolution of knowledge in the field of cognitive science can be expected to overturn working assumptions, approximate models, or specific concepts that were used to guide the development of the cognitive simulation. All of these factors point out that it is very difficult to see a cognitive simulation as a finished static system.

A cognitive simulation, as a complex software system, is subject to all of the difficulties that can plague software development.

Some of the classic advantages of cognitive simulation -- detailed specification of processes, openness to inspection, runnability across many scenarios, comparability to data from people (Simon, 1969; Newell & Simon, 1972) -- depend in part on the assumption that the underlying computer program exists as an objective entity. But when is a cognitive simulation program finished enough to count? As a large software development project, a cognitive simulation can be subject to software stability problems and difficulties in iterative refinement across specific scenarios, among other problems. AI software can be tailored relatively easily to run well for a particular scenario or set of human data, but generate inadequate or nonsensical behavior when confronted with a substantially different scenario.

Constraints on software development interact with the breadth and diversity of cognitive activities to be captured (claim 6.1). The next scenario is likely to invoke new types of knowledge about the domain and to bump into cognitive competencies that were not evident in the cases addressed up to that point in the development process. Sometimes, these new elements may be addressed through modular additions, but the likelihood of success depends on how well the architecture is designed relative to the desired cognitive competencies. And then, in wielding the simulation, one is likely to discover new things about cognitive systems which may very well demand changes in the simulation program. Given these and other pragmatic factors in software engineering, it is very difficult to see a cognitive simulation as a finished system. Rather, cognitive simulations are always in a state of evolution.3

In principle, an advantage of a cognitive simulation is that the processes used by the computer program in response to input about the evolution of the monitored process are open to inspection by the investigators. But what aspects of the program need to be seen? It is important to remember that a protocol of the behavior of a cognitive simulation in response to an incident is but one of many possible reports on the behavior of the computer program (Dennett, 1968). Similarly, a protocol on the behavior of a person or a team during a particular incident is but one kind of report on the behavior of those individuals. The specific constraints on access differ between human and machine cognition, but a protocol of a machine’s cognitive activities has the status of a report about its activities.

Theoretically, there can be many kinds of descriptions of the behavior of a computer program (Newell, 1982). Pragmatically, the software system designer develops mechanisms that give the program observable behavior and provides tools for inspecting the changes in the internal states of the program. What constrains these software capabilities or tools? The developer’s judgment about the factors that are important in the program’s behavior? The developer’s judgment about the appropriate grain of description? The debugging mechanisms and programmer’s interface provided by the development environment? The project resources or the level of effort that remains to be devoted to building an investigator’s interface? The developer’s notions of what is a good human-computer interface for monitoring the performance of a cognitive simulation?

Again, it is theoretical concepts that stand outside of the cognitive simulation itself, theoretical concepts about the cognitive demands and strategies within a cognitive system, that provide a means to decide in a principled manner the kinds of reports of simulation behavior that should be available (examples of where this has been done include Newell’s SOAR system as an instantiation of his theoretical concepts about human cognition; Newell, 1990). This defines another criterion for when a computer program counts as a cognitive simulation -- the developers/investigators must provide theoretically motivated or methodologically motivated capabilities for accessing the behavior of the software system.

A cognitive simulation, as a computer program, may contain a variety of ancillary mechanisms that are needed simply to make the software run, but are not related to the concepts under investigation. However, these auxiliary assumptions can affect or drive the behavior of the computer program (Newell, 1990; Simon, 1991). This creates a problem: How does one know when the behavior of the simulation is due to the concepts instantiated in the program and when it is due to auxiliary aspects required to have a runnable computer program (Newell, 1982)?

Newell (1982) makes this point by distinguishing different levels of realization for a computer program. His point reminds us that cognitive simulations can be used to specify theories, but are not theories (i.e., Newell’s distinction between the knowledge level and the program level of description of a cognitive system). For computer simulations, the program is not and cannot be the model; it is only one realization of the concepts. Newell’s work on a unified theory of cognition as embodied in the SOAR software system illustrates that, to combat the knowledge level “problem”, the mapping between concepts and program mechanisms must be as explicit as possible. The mapping between concepts and program mechanisms becomes a bridge that creates a productive link between the particular and the universal in cognition: General concepts need to be articulated into a system that could meet the cognitive demands of a field of practice, but a system only can do such cognitive work in the particular as a response to a particular scenario sampled from a larger space of problems.

Approximate and incomplete cognitive simulations, if wielded intelligently, can contribute useful results both in general and in particular for an application area.

Asking whether a cognitive simulation is valid is probably a fruitless question. Rather than ask validation questions (is this the correct model?), we contend that questions of usefulness and fruitfulness are the measure of the value of a cognitive simulation (what did it help you discover or learn or test?). The value is not in the simulation as an entity in itself, but rather in the power of discovery that it can provide as part of a kind of difference equation in the hands of intelligent investigators. The information lies in the differences between empirical results and simulation results (Roth et al., 1992), and in comparisons across different scenarios and conditions.

A cognitive simulation, at some stage of evolution and in the context of a designed study, represents hypotheses about the dynamic interplay across the parts of a cognitive system. The value lies in enhancing our ability to engage in an empirical confrontation when the target of interest is cognition embedded within some complex, multifaceted field of practice.

Similarly, Don Norman and others (Barnard, Wilson, & MacLean, 1988) have pointed out that for many complex settings what is needed are approximate models of cognition and human-computer cooperation. As Barnard et al. (1988) expressed it, such models must capture empirical phenomena of interest at molecular rather than atomic levels of description. Such models are tentative and subject to revision as knowledge and results in cognitive science evolve. The use of such models should contribute to the evolution of knowledge in cognitive science. Such models are explicit in regard to the elements of analysis and the principles on which they are based. The process of creating and using a cognitive simulation contributes to making concepts and their interaction explicit, if done in the ways outlined above. Achieving these criteria requires linking modeling and empirical techniques in a reinforcing and interactive cycle. In the end, the yield from cognitive simulation techniques depends more on the intelligence of the investigator than on the sophistication of the tool per se.

CONCLUSIONS

The value of a computer simulation of cognition is not to simply have it, but rather to use it. Running the simulation through a scenario produces specific behavior that can be analyzed and compared to other data. As a result, it is possible to uncover a variety of consequences of the information processing mechanisms that are instantiated in the program.

Since cognitive simulations are a tool for investigation, the architectural aspects of the simulation as a computer program are of secondary interest. It is relatively easy to build an AI system that does some parts of the cognitive work involved in various fields of practice. What matters are the competencies of the program in relation to a model of the cognitive activities and demands of the tasks in question. If a computer program is to count as a cognitive simulation, these competencies must be specified. What parts of the cognitive work involved in a complex task like dynamic fault management should be simulated? What are the cognitive activities involved in the area of dynamic fault management? How should one use an imperfect simulation of cognition (remember, it is utopian to quest after the perfect cognitive simulation) to learn more about the dynamics of joint cognitive systems in general or about a particular joint cognitive system?

Cognitive simulation is one potentially powerful tool in a cycle of empirically based model development and model-based empirical investigations of joint cognitive systems. But it is not a panacea; nor is it without pitfalls. Has the technique been useful? Yes, there are outposts of investigations that involved cognitive simulations and that seem to have added to the research base on joint cognitive systems. However, it is not clear whether the insights gained depended on the role of the cognitive simulation so much as on the insight of the investigators (e.g., the cognitive simulation may play the role of a demonstration vehicle for insights worked out through careful observation and analysis by the investigators).

Areas where cognitive simulation studies can contribute to advancing our understanding of joint cognitive systems in context include: (a) how do temporally evolving situations, as compared with static one-shot decision situations, create different cognitive demands and provide opportunities for different cognitive strategies?; (b) how is attentional focus managed in fields of activity that are data rich and involve multiple interleaved tasks?; (c) how do possibilities for action constrain cognitive systems?; (d) what is the contribution of perceptual, recognition-driven, or pattern-based processing to cognition (rather than modeling cognition as decoupled from perception)?; (e) how does effort or cognitive cost play a role in cognitive systems given the finite resources available to human or machine agents within a cognitive system?

While cognitive simulation is a powerful technique and while there are many computer systems being developed that claim to be cognitive simulations, the burden lies with the developers/investigators to use them in a fashion that adds to and stimulates a process of critical, cumulative growth of knowledge about the dynamics of joint cognitive systems in the field.

REFERENCES

Anderson, J.R. (1990). The adaptive character of thought. Hillsdale, NJ: Lawrence Erlbaum Associates.

Axelrod, R. (1984). The evolution of cooperation. New York: Basic Books.

Barnard, P., Wilson, M. & MacLean, A. (1988). Approximate modelling of cognitive activity with an expert system: A theory-based strategy for developing an interactive design tool. The Computer Journal, 31, 445–456.

Corker, K., Davis, L., Papazian, B., & Pew, R. (1986). Development of an advanced task analysis methodology and demonstration for Army aircrew/aircraft Integration (Tech. Rep. No. BBN 6124). Cambridge, MA: Bolt Beranek and Newman.

Dennett, D. (1968). Computers in behavioral science: Machine traces and protocol statements. Behavioral Science, 13, 155–161.

Elkind, J., Card, S., Hochberg, J., & Huey B. (Eds.). (1990). Human performance models for computer aided engineering. New York: Academic Press.

Hammond, K.R. (1988). Judgment and decision making in dynamic tasks. Information and Decision Technologies, 14, 3–14.

Heil, J. (1981). Does cognitive psychology rest on a mistake? Mind, 90, 321–342.

Hogarth, R.M. (1986). Generalization in decision research: The role of formal models. IEEE Transactions on Systems, Man, and Cybernetics, 16, 445.

Hutchins, E. (1990). The technology of team navigation. In J. Galegher, R. E. Kraut, & C. Egido (Eds.), Intellectual teamwork: Social and technological foundations of cooperative work (pp. 191–220). Hillsdale, NJ: Lawrence Erlbaum Associates.

Hutchins, E. (1991). How a cockpit remembers its speed (Tech. Rep.). University of California at San Diego, Distributed Cognition Laboratory.

Johnson, P.E., Moen, J.B., & Thompson, W.B. (1988). Garden path errors in diagnostic reasoning. In L. Bolc & M. J. Coombs (Eds.), Expert system applications (pp. 395–428). New York: Springer-Verlag.

Kieras, D.E., & Polson, P.G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 22, 365–394.

Kirlik, A., Miller, R.A., & Jagacinski, R. (1989). A process model of skilled human performance in a dynamic uncertain environment. In Proceedings of IEEE Conference on Systems, Man, and Cybernetics, 1, 1–23.

Marr, D. (1982). Vision. San Francisco: Freeman.

Newell, A. (1980). Physical symbol systems. Cognitive Science, 4, 135–183.

Newell, A. (1982). The knowledge level. Artificial Intelligence, 18, 87–127.

Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.

Newell, A., & Simon, H.A. (1963). GPS, a program that simulates human thought. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and thought (pp. 279–293). New York: McGraw-Hill.

Newell, A., & Simon, H.A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Payne, J. W., Johnson, E. J., Bettman, J.R., & Coupey, E. (1990). Understanding contingent choice: A computer simulation approach. IEEE Transactions on Systems, Man, and Cybernetics, 20, 296–309.

Pylyshyn, Z.W. (1991). The role of cognitive architectures in theories of cognition. In K. VanLehn (Ed.), Architectures for intelligence: The 22nd Carnegie Mellon Symposium on Cognition (pp. 189–223). Hillsdale, NJ: Lawrence Erlbaum Associates.

Reason, J. (1987). A preliminary classification of mistakes. In J. Rasmussen, K. Duncan, & J. Leplat (Eds.), New technology and human error (pp. 15–22). Chichester, UK: Wiley.

Roth, E.M., & Woods, D.D., (1988). Aiding human performance: I. Cognitive analysis. Le Travail Humain, 51, 39–64.

Roth, E.M., Woods, D.D., & Pople, H.E. (1992). Cognitive simulation as a tool for cognitive task analysis. Ergonomics, 35, 1163–1198.

Simon, H.A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.

Simon, H.A. (1991). Cognitive architecture and rational analysis: Comment. In K. VanLehn (Ed.), Architectures for intelligence: The 22nd Carnegie Mellon Symposium on Cognition (pp. 25–39). Hillsdale, NJ: Lawrence Erlbaum Associates.

Woods, D.D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P. Goodstein, H. B. Andersen, & S. E. Olsen (Eds.), Mental models, tasks and errors (pp. 128–148). London: Taylor & Francis.

Woods, D.D. (1990). Modeling and predicting human error. In J. Elkind, S. Card, J. Hochberg, & B. Huey (Eds.), Human performance models for computer-aided engineering (pp. 248–274). New York: Academic Press.

Woods, D.D. (1992). The alarm problem and directed attention (Tech. Rep.) Columbus, OH: The Ohio State University, Cognitive Systems Engineering Laboratory.

Woods, D.D. (in press). Towards a theoretical base for representation design in the computer medium: Ecological perception and aiding human cognition. In J. Flach, P. Hancock, J. Caird, & K. Vicente (Eds.), The ecology of human-machine systems. Hillsdale, NJ: Lawrence Erlbaum Associates.

Woods, D.D. (1994). Cognitive demands and activities in dynamic fault management: Abductive reasoning and disturbance management. In N. Stanton (Ed.), The human factors of alarm design. (pp. 63–92). London: Taylor & Francis.

Woods, D.D., Johannesen, L., Cook, R.I., & Sarter, N. (in press). Behind human error: Cognitive systems, computers and hindsight. Wright-Patterson AFB, OH: Crew Systems Ergonomic Information and Analysis Center.

Woods, D.D., & Roth, E.M. (1986). Models of cognitive behavior in nuclear power plant personnel, Vol. 2 (Tech. Rep. No. NUREG-CR-4532). Washington DC: U.S. Nuclear Regulatory Commission.

Woods, D.D., & Roth, E.M. (1988). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interaction (pp. 3–43). New York: North-Holland.

Woods, D.D., Roth, E.M., & Pople, H.E. (1987). Cognitive environment simulation: An artificial intelligence system for human performance assessment, Vol. 2 (Tech. Rep. No. NUREG-CR-4862). Washington DC: U.S. Nuclear Regulatory Commission.

Woods, D.D., Pople, H.E., & Roth, E.M. (1990). The cognitive environment simulation as a tool for modeling human performance and reliability, Vol. 2 (Tech. Rep. No. NUREG-CR-5213). Washington DC: U. S. Nuclear Regulatory Commission.

ACKNOWLEDGMENTS

Our ideas about the role of cognitive simulation evolved throughout the Cognitive Environment Simulation project especially through collaboration with Harry Pople, Jr. We would also like to thank Bernard Pavard for many wonderful discussions of the potential of cognitive simulation techniques. Kevin Corker and the computational human factors team at NASA Ames Research Center have also influenced our ideas on the role of cognitive simulation. The preparation of this paper was supported, in part, by the Aerospace Human Factors Research Division of the NASA Ames Research Center.

1Note that the relationship between concepts and formalization as a computer program is different here as compared to the relationship between concepts and formalization in other areas of study where the formal expression of the model (e.g., in mathematical terms) is intended to be the theory and other types of descriptions (e.g., verbal specifications; models or analogies) are considered to be mere approximations intended to foster comprehension or application.

2EAGOL is a proprietary product of SEER Systems.

3Though small by many yardsticks, the CES project ran into many of the troubles that can arise in software engineering, especially symbolic processing software development, including software stability problems and difficulties in iterative refinement across specific scenarios. The MIDAS project to develop an integrated suite of computer simulations for the design of human-machine systems at NASA was a much larger software engineering effort (e.g., Corker, 1993). For a counter-example where the software development was kept simple and small but still resulted in successful simulation-based investigations of the dynamics of a joint cognitive system see Benchekroun and Pavard (this volume).
