7

Conceptual Models for Training

Renan SAMURÇAY

CNRS - University of Paris 8

There is a general consensus that operators of complex systems use and control these production tools by forming mental models of the system which can differ from those used by system design engineers. The aim of this chapter is to develop the idea that a conceptual model is a kind of system model that combines the properties and functionalities of both the operator’s mental model and real operating models. The present discussion centers on the relevance of these conceptual models for operator training, particularly in the domain of dynamic environment management tasks.

In current technical environments, human operators are used to working with artificial systems that have been designed to fulfill functions corresponding to designers’ intentions. These artificial systems cover devices used by practically everyone, such as computers, calculators, and so on, but also complex systems used in the field of production that require more professional users, such as nuclear power plants (NPPs), industrial processes, and so forth. These systems are designed on the basis of existing engineering models which are more or less calculable and carry out expected goals and functions. As a general rule, these models describe the theoretical normal operations of the systems.

In real operations the role of human operators is either to use these devices as tools to accomplish various tasks (such as e-mail or word processor users), or to intervene in the process of production by adjusting deviations of system functioning from the theoretical model (such as NPP operators or aircraft pilots). In this latter case operators are not mere users but are on a par with the system as regards the functions to be fulfilled for task achievement.

It is often argued that users and operators interacting with such systems use and control them by forming mental models of the system which differ from those used to design the systems. Although the concept of mental models has gained a broad audience since the seventies, there is no single definition or formalism to represent such models, as shown by different overviews of this topic (Gentner & Stevens, 1983; Goodstein et al., 1988; Rogers et al., 1992). The aim of this chapter is not to give a complete account of mental models, but to introduce the idea of a “conceptual model” as a sort of device or system model which combines the properties and functionalities of both the operator’s mental model and real operating models. The second aim is to discuss the relevance of these conceptual models for the training of operators, particularly in the domain of dynamic environment management tasks.

This chapter discusses various models developed to describe complex system functioning. A theoretical framework is proposed to categorize these models and to discuss their relevance for training purposes.

TASKS, ACTIVITIES, AND MODELS

The main function of models is to enable us to make predictions about external events before carrying out an action. They also help us understand and explain the reality we observe and act on. These models can be more or less calculable, formalized, predictive or descriptive depending both on the state of knowledge about the phenomena to be modeled and the language of description used. As stressed by Hollnagel (1988), the nature of these models depends to a great extent on the characteristics of the object system and the purpose of modeling.

Models in complex tasks

To characterize the complexity of a given task domain such as a dynamic environment supervision task, the four dimensions emphasized by Woods (1988) can be used:

- dynamism of the system (the changes occurring in the system are not all introduced by the operator’s actions, the system has its own dynamics, the nature of the problem to be solved can change over time; multiple on-going tasks can have different time spans);

- the number of parts and the extensiveness of interconnections between parts or variables (a given problem can be due to multiple potential causes and can have multiple potential consequences);

- uncertainty (the available data about system functioning can be erroneous, incomplete, and ambiguous; future states may not be completely predictable);

- risk (the consequences of operators’ decisions may be catastrophic or costly as regards the expected task goals).

It is now well known that to manage such complex and dynamic systems, operators need to build and use mental models of the system that provide them with an appropriate level of abstraction for these highly demanding cognitive activities; in this task they are usually assisted by various operative tools which take over part of the cognitive load. The match between operative tools and the activities they support will be discussed below. Mental models form the basis of the operational knowledge which itself constitutes the core of professional competence, as discussed by Rogalski in this volume (see chapter 8). They are constructed either by explicit training or by the activity itself. Their mode of construction may also affect their properties.

Types of models

Figure 1 presents four main distinctions between the different types of models to be discussed from the point of view of their adequacy and relevance for operator training.

FIGURE 1. Different types of models in terms of the structure and the function of the representation and processing system taken into account in modeling.

The first class covers models that engineers use to design systems and those dealing with automatic regulations: These models are mainly computational, obey physical, chemical, or biological laws, and are supported by the existing scientific and engineering knowledge of the domain. The second type of models (operators’ models) concerns operators’ internal representations of the system they are controlling: They are not directly observable but are inferred from observable behaviors. The third type corresponds to the analyst’s description of these internal representations, which can be defined at different levels of abstraction and granularity. Finally, the fourth type (conceptual models) covers descriptions of plant functioning that aim at supporting activity or training. In this case we will distinguish models based solely on an epistemological analysis of the task (prescriptive models) from those which integrate data from the analysis of activity (operative models). Similar distinctions can also be found in Hollnagel (1988).

There are tight connections among these four types of models. The operator model is partly determined by the models used by engineers for plant design and the external representation implemented in cognitive operative tools. The distance between the two types of models can vary as a function of the nature of the plant or device. For example, in some cases such as robotics, operators may need to know how the model of the automatism functions. Norros stresses in this volume (see chapter 9) the critical role of design models in the development of expertise via the construction of theoretical thinking.

The analyst’s description of the operator model is highly dependent on the task model that the analyst refers to. Conversely, the elaboration of conceptual models is based on the analysis of operators’ activity. Although this chapter focuses on the representational aspects of operators’ knowledge, operational knowledge is nevertheless defined as a knowledge structure that incorporates declarative knowledge (the system model), procedural knowledge (procedures determining how to operate on the system), control knowledge (knowledge determining how to use declarative and procedural knowledge in problem solving), and meta-knowledge (knowledge about one’s own competence and preferences).

PROCESS ENGINEERING MODELS

The main aim of the models used in process engineering is to design, calculate, and operate industrial plants which carry out desired transformations of matter. Process engineering is a very young science, but it already has a number of invariant concepts. Thus, each process, regardless of what it produces, can be analyzed in terms of unitary operations (matter and energy transfer between different solid, fluid, and gas phases with respect to the energy required) and reactors (technological tools leading to a given transformation). Moreover, these design models integrate automatic control and regulation models which ensure optimal functioning of the system in real conditions. In the literature, this kind of model is also known as a designer’s conceptual model (Wilson & Rutherford, 1989).

It is well known that these types of models cannot be used directly to infer the knowledge base that operators should acquire to be able to control plants efficiently. Nevertheless, the issue of how, and at what level, the elements elaborated by process engineering models must be taken into account in the development of conceptual models for training design remains open.

OPERATORS’ INTERNAL REPRESENTATIONS AND THEIR DESCRIPTION

The internal representations that operators construct about the plant on which they are acting are not directly observable, but can be inferred from observable behavior. In the past twenty years a great deal of research has been done on operators’ internal representations of plants and, more generally, of different kinds of devices (programmable devices, computers, software, etc.). The descriptions used for modeling operators’ knowledge depend on the purpose of modeling, the framework considered for task analysis, and the methods used to collect data. Nevertheless, research in this area reveals some generic properties and functionalities of these representations and their descriptions.

(1) The first dimension concerns the characteristics of operators’ knowledge models. Operators’ knowledge about system functioning can involve, on the one hand, concepts, rules, procedures, and general physical or chemical laws, and, on the other, the way to implement them operatively in specific situations. These two poles, which should in fact be seen as a continuum, are designated by various authors as “declarative versus procedural,” “static versus dynamic,” or “general versus operative,” and mainly express the idea that operators’ mental models are highly structured by the activity, and that declarative knowledge about the process is not necessarily the operational one. This distinction is also used to characterize operators’ expertise: Level of experience is correlated with the degree of operativity of knowledge.

(2) The second dimension concerns the notion of operativity. Operators’ internal representations are not simple mappings of engineering or prescriptive models: They are simpler and more schematic. This simplification can be explained by Ochanine’s (1978) concept of the “operative image”: he considered that operators’ representations of systems are both laconic and functionally deformed by action requirements. The notion of operative image should not be considered only in its figurative aspects, since its form may be propositional as well as schema- or frame-like. Today the terms “operative representation” and “operative knowledge” are used to designate this kind of knowledge specific to a task or activity.

Ochanine contrasts the operative image with the cognitive image, which is defined as theoretical and exhaustive knowledge about the system. In some cases the cognitive image has been equated with engineers’ knowledge and assumed not to be necessarily operational for controlling the system. Currently there are not enough data to confirm this hypothesis, but there is sufficient evidence to believe that operators’ operational representations are not always sufficient or adequate to solve all classes of problems, and that some aspects of engineers’ knowledge are useful in treating complex problems.

(3) The representations that operators construct about the process are specific to the activity: The control operator’s model of a given plant is different from the one used by maintenance operators, although they work in the same plant. This knowledge can be described by taking two components into account (Bainbridge, 1988):

- knowledge about the functioning of the process

- knowledge about goals and actions on the process

Even though these two components can be described separately for purposes of analysis, in the operator’s mind they should be tightly connected. In static environments, such as the use of a text editor or calculator, users’ representations are generally structured in terms of action goals. The transition from this representation to knowledge about the functioning of the device most often occurs when an action fails. In dynamic environments, knowledge about goals and actions on the process is not sufficient to maintain the system at the desired equilibrium. Moreover, the action-feedback relationship cannot be constructed simply by observing the results of actions, because these results integrate both the operator’s own transformations and the process transformations themselves.

These representations are constructed both by training and by action and experience, and they do not cover the functioning of the process as a whole. Rather they are restricted to that part of the process on which the operator can act directly and to a class of situations for which the operating rules are well known.

(4) Process characteristics partly determine the nature of the representations that operators need to construct. Hoc (1989) defined certain dimensions affecting operators’ representations of a process:

- proximity of control: Operators cannot act directly on some variables they have to control; they have to identify the variables on which they can act to obtain an indirect effect on these control variables.

- information accessibility: Information about some process variables cannot be obtained directly; it must be inferred from directly observable information.

- feedback delay: The effects of some actions may appear only after a long delay (several hours).

- continuity of variables and process transformations: These are not easy to decompose into a succession of discrete states.

These properties have major effects on the content and the form of operators’ mental representations. For instance, it has been shown for a process such as the blast furnace that operators’ knowledge represents observable process variables and some conceptual entities (descriptors of phenomena) which correspond to schematic representations that are operational for control activity (Hoc & Samurçay, 1992). These conceptual constructs enable the operator to understand the current state of the system, to generate hypotheses about its possible further evolution, and to decide on appropriate actions.

This kind of constructed variable (in contrast to observable variables, which are measurable) has also been shown to exist in other process control tasks (Pastré, 1992) and in fire fighting (Rogalski & Samurçay, 1993). These variables, which are also termed “pragmatic concepts,” play an important role both in the reduction of complexity and in the structuring of operative knowledge.

(5) For a given system, different representations with different levels of abstraction and different natures can coexist in the operator’s mind. According to Rasmussen’s (1986) definition of abstraction levels, operators’ knowledge about the process is organized in terms of a two-fold hierarchy: part-whole and means-ends. This description serves to link decision-making strategies to the knowledge type (structural, functional) and to the abstraction level (for instance, general functions or physical form) required for the implementation of these strategies. At the lowest level there is very detailed knowledge about the physical form of the system. Moving upwards, the system can be represented as subsystems formed by collections of physical components. The generic function level serves to view the subsystems in terms of their functionalities. The next level is that of abstract function, such as energy flows. Finally, the system can be represented at the level of its overall goal and purpose. Here the main idea is the relationship between the abstraction level and the nature of the task: The system should be viewed at different appropriate levels when performing different tasks.
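By way of illustration, this two-fold hierarchy can be sketched as a simple data structure in which each node of the part-whole decomposition carries descriptions at one or more means-ends abstraction levels. The toy heating system and all its entries below are invented for illustration and do not come from Rasmussen’s own examples:

```python
# Sketch of Rasmussen's two-fold hierarchy: a part-whole decomposition
# whose nodes each carry descriptions at several means-ends abstraction
# levels. The plant and its entries are hypothetical illustrations.

ABSTRACTION_LEVELS = [
    "physical form",       # detailed physical appearance and location
    "physical function",   # behavior of physical components
    "generic function",    # standard functions (heating, cooling, ...)
    "abstract function",   # mass and energy flows
    "functional purpose",  # overall goal of the system
]

class Node:
    """One element of the part-whole decomposition."""
    def __init__(self, name, descriptions, parts=()):
        self.name = name
        self.descriptions = descriptions  # abstraction level -> description
        self.parts = list(parts)          # sub-components (part-whole axis)

    def view(self, level):
        """Return the description of this node at a given abstraction level."""
        assert level in ABSTRACTION_LEVELS, "unknown abstraction level"
        return self.descriptions.get(level, "(not described at this level)")

# A toy temperature-control subsystem, decomposed into physical components.
switch = Node("temperature switch",
              {"physical form": "bimetal switch mounted on panel B2",
               "physical function": "opens circuit above set point"})
circuit = Node("heating circuit",
               {"physical function": "converts current into heat"})
heating = Node("temperature control",
               {"generic function": "keep vessel temperature in range",
                "abstract function": "regulated energy inflow",
                "functional purpose": "maintain product quality"},
               parts=[switch, circuit])

# Different tasks call for views at different levels:
print(heating.view("generic function"))  # supervision task
print(switch.view("physical form"))      # maintenance task
```

The point of the sketch is that the same subsystem answers differently depending on the level queried, which is exactly the task-dependence of views discussed above.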

(6) The form and the content of operators’ mental models can be marked by the properties of external representations, such as displays, graphics, tables, language, and so forth. Even though there are very few studies on this point, the form in which system information is presented (for instance, table of values vs. curves) is likely to have an effect (positive or negative) on the way this information is encoded in memory and used in problem solving. Payne (1992) argues for the development of research which examines the sensitivity of content and use of mental models to variations in the representational properties of cognitive artifacts.

PRESCRIPTIVE MODELS

Prescriptive models aim at describing the functioning of a process in a way that makes its comprehension easier for the operator. These descriptions are used mainly for the design of control support systems and for operator training. They are primarily based on epistemological process analysis and on assumptions about general human reasoning mechanisms.

In the past fifteen years, one of the challenges to the AI community has been to build models based on “human reasoning” about physical systems. This approach, called “qualitative physics,” is aimed at developing systems which can reason about the physical world as well as engineers or scientists can. As stressed by Weld and de Kleer (1990), some important tasks that could drive qualitative physics are diagnosing, designing, operating, and explaining (for instruction) physical systems. According to Forbus (1990), the goal of qualitative physics (QP) is to capture both the common-sense knowledge of the person on the street and the tacit knowledge underlying the quantitative knowledge used by engineers and scientists.

The key idea of qualitative simulation is to find a way to represent the continuous properties of the world by a discrete set of symbols. This is based on the evidence that human operators hardly ever use numerical values of variables in their reasoning processes: They mainly use qualitative values such as “too high,” “too low,” and so on. In other words, their decision rules are based not only on the specific values of parameters but also on classes of values which are interpreted by the operators in a more general context. In this framework, qualitative values are represented by bounded intervals, and their temporal evolution by logical expressions which can take on three values: increase, decrease, stable. Algorithms serve to calculate the possible states of the system by transition rules as a function of knowledge of the current states.

Some of the potential applications are intelligent tutoring systems and engineering design. Envisionment as an explicit representation of all different possible behaviors of the system is used as a technique to simulate process behavior. Causality plays a major role in qualitative physics; it is the bridge that links reasoning about structure and function. A cause-effect diagram is one of the possible ways to make the device behavior explicit.
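A minimal sketch may make this style of qualitative simulation concrete; the qualitative values, trends, and transition rule below are invented illustrations, not the algorithms of any system cited above:

```python
# Minimal sketch of qualitative simulation: a continuous magnitude is
# abstracted into a discrete set of qualitative values, its evolution into
# the trends {increase, steady, decrease}, and transition rules enumerate
# the reachable states (an "envisionment"). All rules are illustrative.

QUAL_VALUES = ["too low", "normal", "too high"]   # ordered landmark intervals
TRENDS = ["dec", "std", "inc"]

def next_values(value, trend):
    """Possible qualitative values after one step, given the trend."""
    i = QUAL_VALUES.index(value)
    if trend == "inc":
        candidates = [value, QUAL_VALUES[min(i + 1, 2)]]  # may cross a landmark
    elif trend == "dec":
        candidates = [value, QUAL_VALUES[max(i - 1, 0)]]
    else:
        candidates = [value]                               # steady: no change
    return sorted(set(candidates))

def envision(state, depth):
    """Enumerate all (value, trend) states reachable within `depth` steps."""
    if depth == 0:
        return {state}
    value, trend = state
    reached = {state}
    for v in next_values(value, trend):
        for t in TRENDS:                 # the trend itself may also evolve
            reached |= envision((v, t), depth - 1)
    return reached

# From ("normal", "inc") the variable may stay normal or become too high.
states = envision(("normal", "inc"), 1)
print(sorted({v for v, _ in states}))    # → ['normal', 'too high']
```

Even this toy example shows the characteristic property of envisionment: the simulation yields the set of all qualitatively distinct behaviors, not a single numerical trajectory.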

Qualitative physics approaches have generated numerous studies describing the functioning of simple physical devices (Weld & de Kleer, 1990), based on assumptions about human reasoning such as structural abstraction, simplifying assumptions, and operating assumptions (considering the normal mode alone).

The qualitative simulation approach has been criticized for several reasons:

- it is generally poor at dealing with large complex systems such as real plants, and in particular with systems containing feedback: The existing examples only deal with parts of simple physical processes;

- it has difficulty dealing with orders of magnitude of variables and taking temporal reasoning into account;

- it has difficulty distinguishing structure from function: In fact, qualitative simulation does not account for the different points of view and levels that can be used to view the system, and thus neglects the functional relationships between control task and action;

- the human reasoning models underlying the device models are weak: They only include successful human reasoning and well-known human errors, and do not consider the variability of available human strategies.

Points of view on the system

Human ability to reason about physical systems or devices depends on the types of knowledge used and on the way this knowledge is organized. There are three main assumptions:

- knowledge about the process has multiple sources and corresponds to different types of models;

- each piece of knowledge can support one or more specific reasoning paradigms;

- depending on the situation, human operators can shift from one type of model to another.

Brajnik et al. (1989) suggest analyzing the knowledge relevant to a physical system along three dimensions: epistemological type, aggregation level, and physical view. These dimensions can be found under different terms in other authors’ classifications for different kinds of tasks, such as electronic troubleshooting (White & Frederiksen, 1990) and process control (Goodstein & Rasmussen, 1988).

The first dimension concerns the type of objects and relations included in the descriptions of the system (such as structural, behavioral, functional, teleological, and empirical descriptions). Descriptions are assumed to exhibit partial overlap, and each epistemological model may be more or less appropriate for a specific task. The structural representation describes what components constitute the system and how they are connected to each other. The behavioral representation describes how components work and interact, in particular the time evolution of crucial system variables according to physical laws. The functional representation is devoted to describing the specific functions associated with the components of the system; it may depart in varying degrees from the behavioral and structural descriptions. The teleological representation describes the purpose of the device and its subcomponents, and the operational conditions that enable the achievement of these goals through a correct use of the system. The empirical representation covers the associations between system properties that humans usually acquire through direct operation on the system. This knowledge can express empirical laws which to date have no explanatory scientific theories.

The aggregation level represents the degree of granularity of knowledge. The concept of granularity has also been used by Goodstein and Rasmussen (1988) in their concept of abstraction levels, defined as the level of decomposition of the problem space. Depending on the nature of the task, the various components of the system should not only be considered at different levels of abstraction, but these different levels should also be coordinated to guarantee an appropriate action. For instance, to achieve a particular task the system should be conceived of at the level of generic function (heating or temperature control) and simultaneously at the level of physical form (temperature switches), passing through the level of physical function (heating circuits). Moray (1990) proposed the lattice formalism to represent the organization of the operator’s knowledge of a complex system in terms of these levels of abstraction. In this formalism a lattice can be defined for each level of abstraction and description. Operator errors are interpreted as arising when nodes of the mental lattices are not connected in the same way as those of the physical system lattice.

The physical view expresses the idea that there are different points of view from which plant functioning can be considered. Views are ways of looking at the plant from a given perspective using an appropriate filter. For instance, in a complex production system such as a blast furnace, a given phenomenon may be analyzed from both thermodynamic and chemical points of view, and the point of view chosen can change both the description of the transformations and the decomposition of the system. Of course, multiple views do not exist for every device. For instance, the comprehension of a simple electrical circuit does not require multiple physical views, as White and Frederiksen (1990) have shown.

CONCEPTUAL MODELS

Conceptual models, as stressed before, attempt to integrate into prescriptive model design both the specificities of the process and the organization of operators’ knowledge about this particular process. The notion of conceptual model is related to the notion of “operational knowledge” developed by Rogalski in this volume (see chapter 8). The notion of “operational” refers to invariants in efficient strategies. Thus, analyzing operational knowledge consists in describing the efficient operator knowledge implemented in professional practice: It amounts to identifying the categories of objects and procedures common to efficient practices, even when these practices are tied to specific situations. This analysis is equivalent to epistemological analysis in the didactics of sciences, which aims at identifying classes of problems for which a given concept constitutes an appropriate answer.

This approach has been used to describe operational knowledge in different domains such as air traffic control (Bisseret & Enard, 1970), fire fighting (Rogalski & Samurçay, 1993), blast furnace supervision (Hoc & Samurçay, 1992), and automated plastic production control (Pastré, 1992). These analyses have shown that the entities operators manipulate mentally do not entirely match those identified in prescriptive models: The representation and processing systems used by efficient operators involve some operational entities. What was designated above as constructed variables are typical examples of this kind. For instance, in a blast furnace supervision task, it has been shown that operators use such variables (descriptors) to designate and evaluate observable process phenomena indirectly. Air traffic controllers’ mental maps contain elements which differ from those appearing on a geographic map. For fire-fighting commanders the most relevant variables are “tactical variables,” which are not the roads on the geographical map but rather the access roads, defined by their relative positions in space with respect to the evolution of the fire. Pastré (1992) has also shown that constructed variables are useful in defining incidents, such as feed jamming, which operators of a plastic injection machine use to diagnose the operating mode of the system.

The second point relates to the adequacy of epistemological types and of the part-whole decomposition for all types of processes. For instance, a blast furnace is a process which is not easily decomposable into well-identified subcomponents with corresponding functions (the physico-chemical transformations all occur in a single large tank). Thus, operators use a phenomenological decomposition which serves to define the equilibrium states to be maintained. Decomposition, instead of being given by the structural and functional properties of the system, is governed by the properties of the transformations on which actions bear.

The third point relates to the ways in which actions drive relationships. In operator representations, system variables vary in degree of importance and do not necessarily enter into causal relations as these can be defined in a process model. Prescriptive models are ill equipped to represent the delay and magnitude of transformations (operators’ actions and process transformations themselves) or the way operators represent continuous variables. This is probably due to the extreme difficulty researchers have in accessing this kind of information, which operators cannot easily make explicit.

Operational models can be classified into two categories with respect to the nature of the knowledge they model: representation oriented and strategy oriented. The first type mainly describes operational entities and the relations (causal, functional, transformational, etc.) among these entities. The second type mainly describes the organization of information processing which should be implemented when searching for a solution in a particular task domain. Problem-solving methods belong to this latter category: They guide and organize subjects’ problem-solving strategies. This distinction is not really an opposition, but rather highlights the main characteristics of the models. For example, the phenomenological model designed by Hoc and Samurçay (1992) for the blast furnace supervision task is more representation oriented in that it describes the entire process functioning around phenomena which are related to each other mainly by causal relationships and for which four types of entities (causes, consequences, indicators, and possible actions) were defined. Conversely, the goal-oriented model developed by Patrick (1989; 1993) for fault finding in an automated hot strip mill is mostly strategy oriented: It defines three stepwise goals (initial symptom identification, global fault set reduction, and searching within a subsystem) to guide operators’ search by prescribing more tractable goals and subgoals. Similarly, the method for tactical reasoning designed for emergency management tasks is a strategy-oriented model (Samurçay & Rogalski, 1991): It describes the invariant strategies implemented by efficient operators in the form of a control loop composed of the different phases of the decision task (information gathering, planning, decision, and control).
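The representation-oriented kind of model can be illustrated by a small data structure organized around phenomena, each carrying the four entity types named in the text (causes, consequences, indicators, and possible actions). The phenomenon and all its entries below are hypothetical placeholders, not the content of the actual blast furnace model:

```python
# Sketch of a representation-oriented conceptual model: the process is
# described around phenomena, each carrying the four entity types named
# in the text. The example phenomenon and its entries are hypothetical.

class Phenomenon:
    def __init__(self, name, causes, consequences, indicators, actions):
        self.name = name
        self.causes = causes              # what can produce the phenomenon
        self.consequences = consequences  # what it can lead to
        self.indicators = indicators      # observable variables revealing it
        self.actions = actions            # possible operator responses

MODEL = {
    "thermal drift": Phenomenon(
        name="thermal drift",
        causes=["change in ore quality", "blast parameter deviation"],
        consequences=["off-grade hot metal"],
        indicators=["hot metal temperature trend", "silicon content"],
        actions=["adjust fuel injection", "modify blast temperature"],
    ),
}

def diagnose(observed_indicators):
    """Return the phenomena whose indicators match the observations."""
    return [p.name for p in MODEL.values()
            if set(p.indicators) & set(observed_indicators)]

print(diagnose(["silicon content"]))  # → ['thermal drift']
```

A strategy-oriented model would instead encode the stepwise goals themselves (symptom identification, fault set reduction, search within a subsystem); the structure above encodes only what the operator should represent, not how to search.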

The purpose here is not to contrast prescriptive models with conceptual models, but to argue that the functional deformations of operators’ models should be taken into account in the design of conceptual models for training. In this case, engineers’ knowledge can be used to improve and complete operative representations. In this way it is possible to establish a dialectical relationship between the different approaches.

CONCEPTUAL MODELS AND TRAINING

Needs for operator training

Why do operators need to acquire operational models of the processes they control? Reaching optimal productivity within safety and reliability norms requires not only technical solutions but also stable and efficient operator performance. Most technological changes require operators to develop new systems of representation. In well-known, familiar situations, operators handle problems with skill-based and/or rule-based knowledge, using a set of mental actions and shortcuts which enable them to handle the problem without needing to work at the knowledge-based level. The problems encountered by operators when dealing with new and unforeseen problematic situations are not completely defined, particularly in the most complex dynamic environment management situations.

On the other hand, there is considerable evidence that the internal representations people spontaneously build up about a process or device in action situations are not always optimal, although they are sufficient to solve a limited class of problems. For instance, one of the main dimensions of expertise in blast furnace supervision tasks was found to be the richness of the operational knowledge activated during diagnosis activity (Hoc, 1990). In fact, more expert operators were found to use a wider frame for analysis of the situation by considering more constructed variables and by elaborating more operational hypotheses on these variables. This justified the elaboration of a conceptual model for the training of less expert operators (Hoc & Samurçay, 1992).

The main problem in training today is to design operational models for experienced operators who already have strongly constructed rule-based knowledge. There is a need to link system knowledge to decision-making strategies and to classes of situations. Conceptual model based training can be seen as a transition from previously internalized models to more generative ones, each of which handles a given class of problems.

Can operators be trained to use conceptual models?

The HCI literature is replete with information on training novices to use mental models for running a computer or, more generally, a device. The major conclusion to be drawn from this literature is that users develop useful knowledge about the device’s behavior from instructions. However, as stressed by Bibby and Payne (1992), the form and content of the representational structures that people internalize are closely related to the instructional materials that users are given, and the process of internalizing external representations depends heavily on the informational and computational nature of these external representations, that is, on the capacity of the model to make relational computations possible.

A considerable amount of work has also been done in the area of fault-finding training (for a recent overview of training approaches, see Patrick, 1993). The main outcome of these studies is that training with a qualitative model of plant functioning improves diagnosis of both familiar and novel faults. The results obtained by Patrick (1993) on the training of hot strip steel mill operators are promising: Both apprentice and experienced operators who participated in model based training became faster and demonstrated better identification and interpretation of symptoms as a result of training. However, even though there is some evidence that model based training improves task performance in different domains, this does not imply that all model based training material and situations are useful. The content and form of the knowledge represented in the model have to be both valid and capable of being translated into operational knowledge.

In contrast to the processing of static situations, in dynamic environment management the simulation of physical systems (in order to make behavior explicit through envisionment techniques) is considered the best way to train people to understand them. This approach, however, has two shortcomings. First, with the exception of simple or highly deterministic systems, the behavior of most systems, such as a blast furnace or fire propagation, is not completely modeled, making it impossible to simulate the functioning of the entire system. Second, adaptation to the simulator does not necessarily produce knowledge that transfers to unforeseen problem situations, nor does it allow operators to modify their points of view on their practice. Shifting from actions to conceptualization is not an easy task and calls for a change in perspective on the situation. Even in very simple dynamic systems, such as the paradigms used by Broadbent, initial training of novice subjects in system control on the simulator showed that the knowledge the subjects acquired remained very specific to the situations encountered during the interaction (contextualized), had local validity, and did not generalize to control of the whole system (Marescaux & Luc, 1991). The authors conclude, however, that debriefing after simulation could contribute to the construction of more generalizable knowledge.

Which training methods and material should be used?

Moving from the simple to the complex is a very common idea in training design. Because of the complexity of dynamic environment management tasks, most training designs for them are based on knowledge decomposition. In many training programs, plant knowledge is divided into smaller sub-parts which are supposed to be easier to understand.

Bainbridge (1990) suggests that there are at least three principles of dividing a complex process into smaller sub-parts:

- dividing the process into unit operations, sub-parts of the plant where crossflows are relatively simple;

- grouping parts of the plant devoted to the same function;

- making divisions based on the task rather than on the plant.

Even if this kind of decomposition can in some cases be useful in the initial stages of training, its efficiency for professional training is debatable for two reasons. First, this cumulative approach to knowledge construction does not lead to the coordination of the parts that real task settings require. Second, the use of some concepts and strategies is only warranted at a certain level of complexity.

An alternative method consists of viewing the training and learning process as a restructuring process. In this perspective the organization of learning is seen differently: Instead of learning each component separately, learners encounter all the components at an initial level of complexity before encountering the same components in more complex contexts. The interactions among elements are always present, but in progressively more complex contexts. This method was used by Bisseret and Enard (1970) to design training for air-traffic controllers. For each task unit of the conceptual model, a system of representation and processing was defined (e.g., categorize planes with respect to their estimated landing time and location). In the training situation, controllers worked on complex real work situations in which their task was to deal with the restricted problem. The interactions of these subtasks with the other aspects of the general task were simulated by the instructors.

An analogous approach was used by White and Frederiksen (1990) to teach college students how simple electrical circuits work. Again, instead of constructing a training method based on system decomposition, the authors devised an instructional system based on a progression of conceptual models of electrical circuit behavior. The model progression was used to create relatively complex problem sets at each stage of learning, so as to introduce successive refinements into students’ mental models. The progression was based on changes in perspective (as described previously) on the circuit representation (e.g., functional, behavioral, reductionistic) combined with the order of the model: zero-order (when reasoning deals with the binary states of the device) or first-order (when reasoning must deal with evolutions).
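The zero-order/first-order distinction can be made concrete with a small sketch. The following Python fragment is purely illustrative and is not taken from White and Frederiksen’s system: the device (a switch, a bulb, and a charging element) and the simple update rule are invented here to show the difference between reasoning about binary states and reasoning about evolutions over time.

```python
def zero_order(switch_closed: bool) -> str:
    """Zero-order model: the device state follows directly from binary inputs.
    Reasoning needs no history -- the answer is immediate."""
    return "lit" if switch_closed else "dark"


def first_order(switch_closed: bool, charge: float, dt: float = 1.0) -> float:
    """First-order model: a quantity *evolves* toward a target, so reasoning
    must track the direction of change, not just the current state.
    (The 0.5 rate constant is an arbitrary illustrative choice.)"""
    target = 1.0 if switch_closed else 0.0
    return charge + 0.5 * (target - charge) * dt


# Zero-order: one binary question, one binary answer.
print(zero_order(True))        # lit

# First-order: the same question ("switch closed?") yields different answers
# depending on history -- the charge rises toward 1.0 over successive steps.
q = 0.0
for _ in range(3):
    q = first_order(True, q)
    print(round(q, 3))         # 0.5, then 0.75, then 0.875
```

In White and Frederiksen’s terms, a learner with only the zero-order model can answer “is the bulb lit?” but cannot predict how the circuit approaches its final state; the first-order model adds exactly that temporal dimension.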

An important issue in the literature is whether theoretical understanding of system functioning can support effective diagnosis or supervision activities in a complex industrial context. The idea of “theoretical” training has been criticized by various authors along two lines:

(i) Knowledge acquired in this way is hard to proceduralize immediately (many experimental results comparing declarative versus procedural training have drawn this conclusion). Duff (1992) compared theoretical with operative knowledge based training and concluded that even though the latter produces more accurate and rapid performance during initial learning, accuracy drops in novel problem situations: There, performance is enhanced by theoretical learning, although such learning is slow initially.

(ii) After this type of training, declarative knowledge declines while skills increase with field experience. Conflicting data have been published, however, and alternative interpretations of the same evidence can be developed by raising the issue of the meaning of “theoretical”: Efficient theoretical knowledge is knowledge which turns into efficient operational knowledge, enabling the operator to anticipate and control actions in situations even when its formal form has been forgotten. Theoretical knowledge can also be seen as a type of external representation of operational knowledge. Its content and form are tightly related to the nature and characteristics of the task.

Role of the external representations in the acquisition of the operational models

The acquisition of new knowledge, the reorganization of old knowledge, and in fact most of our mental and work activities are mediated by the use of external representations. The objects of this mediation are also known as “cognitive artifacts” (Payne, 1992), “operative cognitive tools” (Rogalski, this volume, chapter 8), or “instruments” (Rabardel, 1991). The main function of an external representation is to support cognitive activities, as is the case for most aid systems such as displays, diagrams, graphics, automatisms, and so on.

The main hypothesis is that there is a strong interaction between the external representations used during the learning process and the internal representations manipulated when performing a task. When external representations can support activity and their use is integrated into the activity, they can produce new knowledge about the activity itself. A study by Bibby and Payne (1993) on the internalization of device knowledge is based on this assumption. Their learning study clearly shows that different external representations of device knowledge (picture, connection diagram, procedures, list of conditions) support different kinds of inference to a greater or lesser extent and thus enable various forms of performance in different tasks (fault-finding, switch setting, operating, etc.). Subjects retain and continue to use some features of the initial instructional description of the device even after extended practice in solving problems on it.

On the other hand, in many cases the appropriation of external representations such as instruments may require specific knowledge acquisition. Rogalski and Samurçay (1993) have shown that in two dynamic environment management tasks, fire fighting and blast furnace supervision, nonspecific training and a lack of specific knowledge, such as basic operations (adding, comparing, transforming curves) on the graphic representation of linear functions, are obstacles to using such representations as tools in the main activity. Conversely, learning something about the tool itself can facilitate the construction of new knowledge about the activity.

The conceptual model we built up for blast furnace supervision aims at the dialectic construction of these two kinds of knowledge (Hoc & Samurçay, 1992). This model decomposes system operation as a whole into elementary phenomena and serves to analyze a given process situation from any one standpoint, such as thermodynamics, chemical equilibrium, and so on. The model is hybrid in two respects: It is based both on the operators’ representation of the process and on engineers’ “scientific” knowledge, and it refers simultaneously to causal, topological, functional, empirical, and operational knowledge (a repertory of actions on the process). The conceptual model has been tested through analysis of existing data: It is sufficiently viable in terms of hypothesis generation in supervision activity, it takes into account the hybrid nature of knowledge representation of the process, and it enables links with actions. Descriptors of phenomena are promising tools for displaying information, particularly when the aim is to improve the reliability and exhaustiveness of hypothesis management. General operator consensus on their semantics could provide an operational language for communicating with computer support, as well as with operators on the next shift.

A prototype of a support system based on this model has been constructed: The implemented model is not a dynamic simulation of the process (which is impossible given current scientific and engineering knowledge for process modeling) but rather an interface which mainly structures information gathering and management activities (Samurçay & Hoc, 1991). The preliminary findings show that training (solving simulated diagnosis problems) at least improved anticipation of possible evolutions of the process.

TRAINING IN WORK SITUATIONS

What is the optimal way to integrate conceptual models as tools into the development of competences through research and analysis in work situations themselves? In many cases the artificial construction of new or complex problem situations that would justify the use of model based knowledge is difficult or impossible.

Baerentsen (1991) has pointed to the importance of extensive use of episodic memory and narrative accounts of incidents in system control. He argues that episodic memory serves as a basis for analogical problem solving when similar events occur, and as a medium for creating a foundation of shared knowledge among operators as well as general system knowledge. Norros (this volume, chapter 9) argues along similar lines, describing reflexive and cooperative activities as mechanisms for the development of expertise in everyday work situations. In these cases, conceptual models could be used to build specific tools for sharing and constructing a common knowledge base between engineers and operators and among operators themselves.

These tools should support at least three types of activities which are usually implemented after major incidents:

- anticipate possible problem situations;

- analyze previous problem situations;

- analyze the knowledge and action rules previously used to handle these situations, and construct a posteriori the knowledge and actions which could have been used.

These kinds of analyses are usually carried out by technical services alone, without any feedback to operators.

Many of the mechanisms involved in the development of competence in work situations remain to be discovered.

REFERENCES

Baerentsen, K.B. (1991, September). Knowledge and shared experience. Paper presented at the Third European Conference on Cognitive Science Approaches to Process Control. Cardiff, UK.

Bainbridge, L. (1988). Types of representation. In L.P. Goodstein, H.B. Andersen, & S. Olsen (Eds.), Tasks, errors and mental models (pp. 70–91). London: Taylor & Francis.

Bainbridge, L. (1990). Development of skill, reduction of workload. In L. Bainbridge, & S.A. Ruiz Quintanilla (Eds.), Developing skills with information technology (pp. 87–116). Chichester, UK: Wiley.

Bibby, P.A., & Payne, S.J. (1992). Mental models, instruction and internalization. In Y. Rogers, A. Rutherford, & P.A. Bibby (Eds.), Models in the mind: Theory, perspective & application (pp. 153–172). London: Academic Press.

Bibby, P.A., & Payne, S.J. (1993). Internalization and the use-specificity of device knowledge. Human-Computer Interaction, 8, 25–56.

Bisseret, A., & Enard, C. (1970). Le problème de structuration de l’apprentissage d’un travail complexe [Structuring training for complex work. A training method based on continual interaction between programmed units: MICUP]. Bulletin de Psychologie, 284, 11–12, 632–648.

Brajnik, G., Chittaro, L., Guida, G., Tasso, C., & Toppano, E. (1989, September). The use of many diverse models of an artifact in the design of cognitive aids. Paper presented at the Second European Meeting on Cognitive Science Approaches to Process Control. Siena, Italy.

Duff, S.C. (1992). Mental models and multi-record representations. In Y. Rogers, A. Rutherford, & P.A. Bibby (Eds.), Models in the mind: Theory, perspective & application (pp. 173–186). London: Academic Press.

Forbus, K.D. (1990). Qualitative process engine. In D.S. Weld, & J. de Kleer (Eds.), Readings in qualitative reasoning about physical systems (pp. 220–235). San Diego, CA: Morgan Kaufmann Publishers.

Gentner, D., & Stevens, A.L. (Eds.). (1983). Mental models. Hillsdale, NJ: Lawrence Erlbaum Associates.

Goodstein, L.P., Andersen, H.B., & Olsen, S.E. (Eds.). (1988). Tasks, errors and mental models. London: Taylor & Francis.

Goodstein, L.P., & Rasmussen, J. (1988). Representation of process state, structure and control. Le Travail Humain, 51, 19–37.

Hoc, J.M. (1989). Cognitive approaches to process control. In G. Tiberghien (Ed.), Advances in cognitive science, Vol 2: Theory and applications (pp. 178–202). Chichester, UK: Horwood.

Hoc, J.M. (1990, September). Operator expertise and task complexity in diagnosing a process with long time lags: blast furnace supervision. Paper presented at the Third European Conference on Cognitive Science Approaches to Process Control. Cardiff, UK.

Hoc, J.M., & Samurçay, R. (1992). An ergonomic approach to knowledge representation. Reliability Engineering and System Safety, 36, 217–230.

Hollnagel, E. (1988). Mental models and model mentality. In L.P. Goodstein, H.B. Andersen, & S. Olsen (Eds.), Tasks, errors and mental models (pp. 261–268). London: Taylor & Francis.

Marescaux, P.J., & Luc, F. (1991). An evaluation of the knowledge acquired at the control of a dynamic simulated situation through a static situation questionnaire and a “teaching back” debriefing. In F. Daniellou, & Y. Quéinnec (Eds.), Designing for everyone (pp. 1673–1675). London: Taylor & Francis.

Moray, N. (1990). A lattice theory approach to the structure of mental models. In D.E. Broadbent, A. Baddley, & J.T. Reason (Eds.), Human factors in hazardous situations (pp. 129–135). Oxford, UK: Clarendon Press.

Ochanine, D. (1978). Le rôle des images opératives dans la régulation des activités de travail [The role of operative images in the regulation of work activities]. Psychologie et Education, 2, 63–72.

Pastré, P. (1992). Requalification des ouvriers spécialisés et didactique professionnelle [Professional qualification of control operators and occupational education]. Education Permanente, 111, 33–54.

Patrick, J. (1989, September). Representation and training of fault-finding. Paper presented at the Second Conference on Cognitive Science Approaches to Process Control. Siena, Italy.

Patrick, J. (1993, September). Training fault-finding skills in complex industrial contexts. Paper presented at the Fourth European Conference on Cognitive Science Approaches to Process Control. Hillerød, Denmark.

Payne, S.J. (1992). On mental models and cognitive artifacts. In Y. Rogers, A. Rutherford, & P.A. Bibby (Eds.), Models in the mind: Theory, perspective & application (pp. 103–118). London: Academic Press.

Rabardel, P. (1991). Activity with a training robot and the formation of knowledge. Journal of Artificial Intelligence in Education, 3–14.

Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.

Rogalski J., & Samurçay R. (1993). Représentations: Outils cognitifs pour le contrôle d’environnements dynamiques [Representations: Cognitive tools for dynamic environment management]. In A. Weill-Fassina, P. Rabardel, & D. Dubois (Eds.), Représentations pour l’action [Representations for acting] (pp. 183–207). Toulouse: Octarès.

Rogers, Y., Rutherford, A., & Bibby, P.A. (Eds.). (1992). Models in the mind: Theory, perspective & application. London: Academic Press.

Samurçay, R. & Hoc, J.M. (1991). Modelling operator knowledge and strategies for the design of computer support to process control. In F. Daniellou, & Y. Quéinnec (Eds.), Designing for everyone (pp. 823–826). London: Taylor & Francis.

Samurçay, R., & Rogalski, J. (1991). A method for tactical reasoning (MTR) in emergency management: Analysis of individual acquisition and collective implementation. In J. Rasmussen, B. Brehmer, & J. Leplat (Eds.), Distributed decision making: Cognitive models for cooperative work (pp. 291–301). Chichester, UK: Wiley.

Weld D.S., & de Kleer, J. (Eds.). (1990). Readings in qualitative reasoning about physical systems. San Diego, CA: Morgan Kaufmann Publishers.

White, B.Y., & Frederiksen, J.R. (1990). Causal model progression as a foundation for intelligent learning environments. Artificial Intelligence, 42, 99–157.

Wilson, J.R., & Rutherford, A. (1989). Mental models: Theory and application in human factors. Human Factors, 31, 617–634.

Woods, D.D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L.P. Goodstein, H.B. Andersen & S. Olsen (Eds.), Tasks, errors and mental models (pp. 128–148). London: Taylor & Francis.
