192 Joint Cognitive Systems
presently fly according to directions from air traffic control rather than as
they please. (This is, however, likely to change in the near future, but that is
another story.) In process control proper, the view of the past is often
institutionalised by necessity, because the process goes on over long periods
of time, often months or years. Knowing what has happened in the past, on
the previous shift or since start-up, can therefore be crucial. The view of the
present is, of course, supported by control room design in itself, while the
view of the future is given by procedures and instructions, and in some cases
also by specific computerised support and displays.
Adaptation
In discussing the design of control rooms, even when some of them are
virtual as in the above, it is nearly unavoidable to include the issue
of adaptation, defined as the ability to make appropriate responses to changed
or changing circumstances. Adaptation, and especially the concept of an
adaptive interface or adaptive information presentation, has for many years
been seen as the solution to many of the problems that have plagued process
control. Indeed, the answer to the questions of presenting the right
information, in the right form, at the right time, has often been taken to be
adaptation. This would at least solve the problem of presenting the
information in the right form. The argument for doing this is, however,
slightly misleading. The mistake is that the ability to reason post hoc about
what the situation ought to have looked like is different from the ability to
determine it
as and when it happens. The belief in adaptation as a panacea nevertheless
dies hard, and is still being promoted as the solution to the ever increasing
volume of data in which we are immersed. A recent example is the proposal
to solve the problem of too many information sources in a car, where drivers
obviously have to focus primarily on the driving, by developing an adaptive
interface. In human-machine interaction, adaptation is typically
proposed as one of the following.
Changes to information presentation, which can be applied to contents,
format, or timing. If this works, the obvious advantage is that it may
facilitate signal discrimination, situation assessment and feedback
evaluation. The problem and potential disadvantage are that it is very
difficult to predict needs and conditions ahead of time, as discussed in
Chapter 4.
Response recognition and/or completion, which can be applied to single
responses or to sequences of actions. This is certainly feasible, and a
number of techniques are available, going back to Markov chains. The
disadvantage is that it works best for command-type responses (single
responses), since the uncertainty of complex responses (sequences) can be
formidable.
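The contrast between single responses and sequences can be made concrete with a small sketch. The fragment below is a minimal, hypothetical illustration (the action names and the `ResponsePredictor` class are invented for this purpose) of a first-order Markov model over operator actions: predicting the single next response is straightforward, but the confidence in a whole sequence is the product of the step probabilities and therefore shrinks quickly with length.

```python
from collections import defaultdict

class ResponsePredictor:
    """First-order Markov model over observed operator actions (toy sketch)."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        # Count transitions between consecutive actions.
        for a, b in zip(sequence, sequence[1:]):
            self.counts[a][b] += 1

    def predict_next(self, action):
        # Most likely next action and its estimated probability.
        nxt = self.counts[action]
        if not nxt:
            return None, 0.0
        total = sum(nxt.values())
        best = max(nxt, key=nxt.get)
        return best, nxt[best] / total

    def sequence_confidence(self, sequence):
        # Confidence in a whole sequence is the product of the step
        # probabilities, so it decays multiplicatively with length.
        p = 1.0
        for a, b in zip(sequence, sequence[1:]):
            nxt = self.counts[a]
            total = sum(nxt.values())
            p *= nxt.get(b, 0) / total if total else 0.0
        return p

predictor = ResponsePredictor()
for seq in [["alarm", "ack", "check", "adjust"],
            ["alarm", "ack", "check", "log"],
            ["alarm", "ack", "adjust", "log"]]:
    predictor.observe(seq)

print(predictor.predict_next("alarm"))  # single responses are predicted well
print(predictor.sequence_confidence(["alarm", "ack", "check", "adjust"]))  # ≈ 0.33
```

Even in this tiny example the certainty about the next single action is high, while the estimated probability of the full four-step sequence has already dropped to about a third, which is the point made above about the formidable uncertainty of complex responses.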
The view of, and the attitude to, adaptation reflect the assumptions of the
underlying model of human performance. A procedural prototype model
suggests that adaptation is beneficial because it reduces demands on ‘input
processing’. A contextual control model suggests that adaptation can be
detrimental because it reduces predictability. Since predictability is of
primary importance for JCSs, it stands to reason that adaptation should be
used rather sparingly, if at all. The strength of humans is that they can
rather quickly learn or find out how a system or process functions and use
this knowledge to tailor their performance. If that learning is degraded
because the system
keeps changing, the result may be oscillations or hunting due to insufficient
damping. This is clearly not a desirable state.
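The hunting effect can be illustrated with a toy control loop. In the sketch below (the numbers and the `track` function are invented for illustration), an operator controls a system whose output is proportional to the input, learning the system's gain from the previous step. When the gain is constant, the error vanishes after one step; when the system keeps adapting, the operator's model is always one step behind and the error oscillates without ever settling.

```python
def track(target, gains, steps=8):
    """Operator drives y = g*u towards target, learning g from the last step."""
    g_hat = 1.0                     # operator's current model of the system gain
    errors = []
    for k in range(steps):
        g = gains[k % len(gains)]   # the system's actual gain at this step
        u = target / g_hat          # act on the current model of the system
        y = g * u                   # actual system response
        errors.append(abs(target - y))
        g_hat = y / u               # update the model from what was just observed
    return errors

stable   = track(10.0, gains=[2.0])        # the system stays the same
shifting = track(10.0, gains=[2.0, 0.5])   # an 'adaptive' system keeps changing
```

With a fixed gain the error drops to zero after the first step; with the alternating gain the error hunts between two large values indefinitely, because the operator's model chases a target that moves as fast as the model is updated.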
Proponents of adaptive interfaces or adaptive interaction often point to the
flexibility and efficiency of human-human communication. It is beyond
dispute that humans are very capable of adapting the way they communicate
to others, and that this in most cases is one of the reasons why
communication is so effective. But the reason why this is possible is that
humans, being models of each other, have the requisite variety built in, so
to speak. Communication is very much a case of being in or establishing
control, as pointed out by Shannon & Weaver (1969; orig. 1949). And control,
as we know by now, requires a good model of the system to be controlled.
Since artefacts are not good models of users, and are unlikely to become
so in the foreseeable future, it follows that adaptation is doomed to fail as
a panacea. This does not preclude that it may work under very specific
circumstances, for instance well-understood and narrowly defined work
contexts. Attempts to use it outside of such circumstances are nevertheless
unlikely to succeed and may, at best, unwittingly impose the forced
automaton metaphor.
DECISION SUPPORT
Another major field of application for CSE is decision making and decision
support. One of the first books that clearly bore the mark of CSE was, indeed,
about intelligent decision support systems (Hollnagel, Mancini & Woods,
1986). Neither the interest in, nor the importance of, this topic has waned
in the years since then. On the contrary, the veritable explosion in computing
and communication technologies has made the problem of decision making
all the more potent.
Decision making has traditionally been about selecting among
alternatives, i.e., choosing what to do. This tradition can be traced back to the
roots of decision theory and the philosophical aspects of knowing what is
correct and what is incorrect. The main body of decision research has adopted
a rather formal or normative view (Edwards, 1954) and striven to find ways
of reconciling the obvious inability of humans to behave as rational decision
makers with the requirements of the theories (Gilovich et al., 2002). Despite
two small revolutions – the principle of approximate decisions from the
theory of satisficing (Simon, 1955) and the school of naturalistic decision
making (Klein et al., 1993) – decision making is still very much seen as a
question of making the right decision, hence of obtaining and processing
information. The view that decision making is a distinct process that can and
should be supported is a misunderstanding that has been inherited from
normative decision theory and reinforced by the school of human information
processing. That view should be compared to the more descriptive or
naturalistic approaches where decision making is seen as sense making, and
where therefore one should support making sense rather than making
decisions. This can also be formulated as the question of whether decision
making is something that occurs at a separate point in time or whether it is
part of a continuous control process. In the latter case it becomes more
important to support, e.g., monitoring, detection and recovery rather than to
support decision making as such.
In a CSE perspective, decision making is not so much about what to do
but about how and when it should be done. Decision-making is thus typically
concerned with when to do something, about magnitude or how much should
be done (level of force, amount of resources spent), about modes and means
of implementation, etc. In other words, decision-making more often deals
with the ways to carry out an alternative than the choice of the alternative as
such. This is particularly so for situations where the alternatives are obvious
and given in advance, often as binary choices, but where there can be many
ways of implementing a specific alternative. A simple example is fire
fighting: here you can either decide to fight the fire or to let it burn. But the
decision to fight the fire requires further decisions about how to do it, how
many resources to put in, which strategy to apply, when to do what, etc.
As a consequence of this, the nature of decision support changes and must
be considered anew. First of all it cannot simply be an issue of automation,
since decisions cannot be automated without ceasing to be decisions.
Automation works fine for routine tasks, where most if not all conditions
can be anticipated. Automation requires that the environment is highly
regular, i.e., that there is only a limited set of possible conditions, and that
these can be identified with high reliability. But if the environment is highly
regular then it is possible to plan in advance what to do. There is therefore no
need to make decisions and consequently no need of a decision support
system. Conversely, if the environment is irregular and unpredictable, then it
is impossible to introduce effective automation. Decisions are required when
the conditions are irregular, and when it is not known in advance how the
system should respond. Decision support should therefore not be seen as
automation in the usual meaning of the term, because it is not feasible to
automate something that is not highly regular and reliable. Decision support
does not automate what humans do, in the sense of taking over – as in
prosthetic support. Decision support rather provides functions that humans
cannot accomplish – or at least cannot accomplish well – and by virtue of that
amplifies or augments human decision-making as an overall function.
Decision support must also be continuous rather than discrete, and closely
integrated with the task. This means that decision support can no longer be
treated as a separate issue, just as the decision is no longer a separate and
identifiable process. In that sense many or all aspects of interface and
interaction design become issues of decision support, but not as decision
support in a separate capacity. In considering the difference between
intelligent decisions and intelligent support, CSE makes it clear that the
intelligence cannot be in the support but must be in the decision
maker, i.e., in the controlling system. We should therefore strive to support
intelligent decisions and the intelligent implementation of choices made. The
implementation can be described as an issue of remaining in control of the
situation, which essentially means avoiding unpredictable developments and
events. In that sense the implementation issues (e.g., how and when rather
than what) become issues of maintaining control in the face of difficulties.
The design of decision support may therefore be approached from an analysis
of the control issues, specifically how control can be lost and how it can be
maintained and regained.
THE LAST WORDS
This book started by characterising the driving forces that have shaped CSE.
These were computerisation and the growing complexity of systems that
resulted from that, the increased conspicuousness of the human factor in the
functioning and malfunctioning of these systems, and finally the effects
of the dominant scientific paradigm, which promoted the view of humans as
information processing systems. Of the three driving forces the first remains
with us and is probably beyond control, even for the most self-assured
Luddite. The second is less unalterable, since it very much depends on the
perspective taken. That, in turn, is partly a consequence of the third driving
force, which definitely is something that can be changed.
It is undeniable that thinking of humans and of systems in general in
terms of information processing and transmission has been of immense value
as a way of coping with the complexity of the real world. This is so for
the behavioural sciences, including human factors, for economics, for
sociology, for genetics, and probably for many other sciences as well. Yet for
the behavioural sciences the analogy has been partly devastating as well, both
because it imposed a reduced description or model of humans and because it
introduced a conceptual separation between humans and machines. As far as
the former goes, this resulted in an unfortunate separation of cognition, or
‘cold cognition’ (Abelson, 1963) from other facets of human behaviour,
particularly those that have to do with affect and emotion (‘hot cognition’).
One consequence of that is a misguided attempt to re-establish the balance by
focusing on the emotional aspects, as if emotion and cognition were
opposites. As far as the latter goes, this resulted in an uncritical acceptance of
linear decomposition as the primary analytical principle. Although
decomposition is necessary for analysis, it is crucial to keep in mind that the
components found in this way are artefacts of the underlying system
description, and that they are meaningful only in relation to the whole from
which the analysis started.
The view espoused by CSE is based on the general systems perspective,
which – as a paradigm – precedes information processing. This view is
cogently represented by cybernetics, but the roots go back to Ludwig von
Bertalanffy’s General Systems Theory, and even further back to the 19th-
century English philosopher of science George Henry Lewes, who suggested
the important distinction between resultant and emergent phenomena.
CSE can in a shorthand version be said to deal with how joint cognitive
systems use artefacts to cope with complexity. The emphasis of this
formulation is on the dynamic aspects, on what the JCS does rather than on
how it does it. The descriptions that have been developed by CSE, the
concepts and the methods, have a pragmatic purpose, and their possible
success must be judged by how well they achieve that purpose, i.e., how well
they enable us to engineer cognitive systems. Although this book mostly has
referred to human-machine systems, to operators working with technology,
CSE is about joint cognitive systems in general, and not about humans only.
This is another reason why there is little emphasis on the possible
‘mechanisms of the mind', since the ‘mind' of an organisation naturally is
different from the ‘mind’ of an individual.
The first three chapters of this book have described the background of
CSE as well as the basic conceptual and methodological constituents. The
following three chapters went into the details of the three main themes:
coping with complexity, use of artefacts, and joint cognitive systems.
Chapters seven and eight presented the basic principles for modelling and
understanding control, and for how the temporal dynamics could be
described. The final chapter outlined how CSE can be applied, but for
reasons of space this was necessarily kept short. A more extensive
description of the application of CSE must go into details of how to observe
and study JCSs in the field, how to derive the basic findings or control ‘laws