The net result of these developments is a positive feedback loop, which
means that deviations will tend to grow larger and larger, resulting in more
serious events and accidents (Maruyama, 1963). Although this interpretation
may be overly pessimistic, the fact remains that we seem to be caught in a
vicious circle that drives the development towards increasingly complex
systems. One of the more eloquent warnings against this development was
given in Charles Perrow’s aptly named book Normal Accidents (Perrow,
1984), in which he argued that systems had become so complex that
accidents were the norm rather than the exception. It is in this sense that the
growing technological complexity is a challenge as well as a motivation for
CSE.
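The deviation-amplifying character of such a loop is easy to show numerically. The short sketch below is only an illustration of Maruyama's (1963) point, not part of the original argument; the gain values are arbitrary assumptions.

def evolve(deviation: float, gain: float, steps: int) -> list[float]:
    # Multiply the deviation by a fixed gain at every step and record it.
    trajectory = []
    for _ in range(steps):
        deviation *= gain
        trajectory.append(round(deviation, 3))
    return trajectory

# Positive (deviation-amplifying) feedback: gain > 1, the deviation grows.
print(evolve(deviation=1.0, gain=1.5, steps=6))  # [1.5, 2.25, 3.375, 5.062, 7.594, 11.391]
# Negative (deviation-damping) feedback: gain < 1, the deviation dies out.
print(evolve(deviation=1.0, gain=0.5, steps=6))  # [0.5, 0.25, 0.125, 0.062, 0.031, 0.016]

With an amplifying gain, every cycle of the loop makes the deviation larger, which is the sense in which the developments sketched above tend to produce increasingly serious events.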
Figure 1.1 is obviously a simplification, which leaves out many nuances
and opportunities for self-correction in the loop. In reality the situation is not
always as bad as Figure 1.1 implies, since most complex systems function
reliably for extended periods of time. Figure 1.1 is nevertheless useful to
illustrate several important points.
Systems and issues are coupled rather than independent. If we disregard
these couplings in the design and analysis of such systems, we do so at
our own peril.
Events and relations must be understood in the context where they occur.
It is always necessary to consider both dependencies on other parts of the
system and on events that went before. This is particularly the case for
human activities, which cannot be understood only as reactions to events.
Control is fundamental in the definition of a cognitive system. Since all
systems exist in environments that to some extent are unpredictable, there
will sooner or later be a situation that was not considered when the system
was designed. It can be difficult enough to keep control of the system
when it is subject only to the ‘normal’ variability of the environment, but
it becomes a serious challenge when unexpected situations occur. In order
for the system to continue to function and maintain its integrity, control is
necessary, whether it is accomplished by the system itself or by an
external agent or entity.
The positive feedback loop described above is a useful basis for
understanding changes in human interaction with technology. Chapter 2 will
provide a more thorough treatment of this issue and describe how
technological developments have led to changes in the nature of work. For
the moment we shall simply name three significant consequences of the
growing complexity.
The striving for higher efficiency inevitably brings the system closer to
the limits for safe performance. Concerns for safety loom large and
neither public opinion nor business common sense will accept efficiency
gains if they lead to significantly higher risks, although there sometimes
may be different opinions about when a risk becomes significantly higher.
Larger risks are countered by applying various kinds of automated safety
and warning systems, although these in turn may increase the complexity
of the system and hence lead to even greater overall risks. It may well be that
the number of accidents remains constant, but the consequences of an
accident, when it occurs, will be more severe.
A second important issue is the increased dependence on proper system
performance. If one system fails, it may have consequences that go far
beyond the narrow work environment. The increasing coupling and
dependency among systems means that the concerns for human
interaction with technology must be extended from operation to cover
also design, implementation, management, and maintenance. This places
new demands on the models and methods for describing this interaction,
and hence on the science behind it.
A third issue is that the amount of data has increased significantly. The
sheer number of systems has increased and so has the amount of data that
can be obtained from each system, due to improved measurement technology,
improved transmission capacity, etc. Computers have not only helped us
to produce more data but have also given us more flexibility in storing,
transforming, transmitting and presenting the data. This has by itself
created a need for better ways of describing humans, machines, and how
they can work together. Yet although measurements and data are needed
to control, understand, and predict system behaviour, data in itself is not
sufficient. The belief that more data or information automatically leads to
better decisions is probably one of the most unfortunate mistakes of the
information society.
CONSPICUOUSNESS OF THE HUMAN FACTOR
Over the last 50 years or so the industrialised societies have experienced
serious accidents with unfortunate regularity, leading to a growing realisation
of the importance of the human factor (Reason, 1990). This is most easily
seen in how accidents are explained, i.e., in what the dominant perceived
causes appear to be.
It is commonly accepted that the contribution of human factors to
accidents is between 70% and 90% across a variety of domains. As argued
elsewhere (Hollnagel, 1998a), this represents the proportion of cases where
the attributed cause in one way or another is human performance failure. The
attributed cause may, however, be different from the actual cause. The
estimates have furthermore changed significantly over the last 40 years or so,
as illustrated by Figure 1.2. One trend has been a decrease in the number of
accidents attributed to technological failures, partly due to a real increase in
the reliability of technological systems. A second trend has been an increase in
the number of accidents attributed to human performance failures,
specifically to the chimerical ‘human error’. Although this increase to some
extent may be an artefact of the accident models that are being used, it is still
too large to be ignored and probably represents a real change in the nature of
work. During the 1990s a third trend has been a growing number of cases
attributed to organisational factors. This trend represents a recognition of the
distinction between failures at the sharp end and at the blunt end (Reason,
1990; Woods et al., 1994). While failures at the sharp end tend to be
attributed to individuals, failures at the blunt end tend to be attributed to the
organisation as a separate entity. There has, for instance, been much concern
over issues such as safety culture and organisational pathogens, and a number
of significant conceptual and methodological developments have been made
(e.g. Westrum, 1993; Reason, 1997; Rochlin, 1986; Weick, Sutcliffe &
Obstfeld, 1999).
[Figure 1.2 is a line graph plotting the percentage of attributed causes (technology, human factors, organisation) against the years 1965-2005.]
Figure 1.2: Changes to attributed causes of accidents.
The search for human failures, or human performance failure, is a
pervasive characteristic of the common reaction to accidents (Hollnagel,
2004). As noted already by Perrow (1984), the search for human failure is the
normal reaction to accidents:
Formal accident investigations usually start with an assumption that
the operator must have failed, and if this attribution can be made, that
is the end of serious inquiry. (Perrow, 1984, p. 146)
Since no system has ever built itself, since few systems operate by
themselves, and since no system maintains itself, the search for a
human in the path of failure is bound to succeed. If it is not found directly at the
sharp end as a ‘human error’ or unsafe act, it can usually be found a few
steps back. The assumption that humans have failed therefore always
vindicates itself. The search for a human-related cause is reinforced both by
past successes and by the fact that most accident analysis methods put human
failure at the very top of the hierarchy, i.e., as among the first causes to be
investigated.
THE CONSTRAINING PARADIGM
One prerequisite for being able to address the problems of humans and
technological artefacts working together is the possession of a proper
language. The development of a powerful descriptive language is a
fundamental concern for any field of science. Basically, the language entails
a set of categories as well as the rules for using them (the vocabulary, the
syntax, and the semantics). Languages may be highly formalised, as in
mathematics, or more pragmatic, as in sociology. In the case of humans and
machines, i.e., joint cognitive systems, we must have categories that enable
us to describe the functional characteristics of such systems and rules that tell
us how to use those categories correctly. The strength of a scientific language
comes from the concepts that are used and from the precision of their
interpretation (in other words, from the lack of ambiguity). The third driving force of CSE
was the need for a language to describe human-technology coagency that met
three important criteria:
It must describe important or salient functional characteristics of joint
human-machine systems, over and above what can be provided by the
technical descriptions.
It must be applicable for specific purposes such as analysis, design, and
evaluation – but not necessarily explanation and theory building.
It must allow a practically unambiguous use within a group of people, i.e.,
the scientists and practitioners who work broadly with joint human-
machine systems.
In trying to describe the functioning and structure of something we cannot
see, the mind, we obviously use the functioning or structure of something we
can see, i.e., the physical world and specifically machines. The language of
mechanical artefacts had slowly evolved in fields such as physics,
engineering, and mechanics and had provided the basis for practically every
description of human faculties throughout the ages. The widespread use of
models borrowed from other sciences is, of course, not peculiar to
psychology but rather a trait common to all developing sciences. Whenever a
person seeks to understand something new, help is sought in what is already
known, in the form of metaphors or analogies (cf. Mihram, 1972).
Input-Output Models
The most important, and most pervasive, paradigm used to study and explain
human behaviour is the S-O-R framework, which aims to describe how an
organism responds to a stimulus. (The three letters stand for Stimulus,
Organism, and Response.) The human condition is one of almost constant
exposure to a bewildering pattern of stimuli, to which we respond in various
ways. This may happen at the level of reflexes, such as the patellar reflex or
the response of the sympathetic nervous system to a sudden threat. It
may happen in more sophisticated ways as when we respond to a telephone
call or hear our name in a conversation (Cherry, 1953; Moray, 1959;
Norman, 1976). And it happens as we try to keep a continued awareness and
stay ahead of events, in order to remain in control of them.
Although the S-O-R paradigm is strongly associated with behaviourism, it
still provides the basis for most descriptions of human behaviour. In the case
of minimal assumptions about what happens in the organism, the S-O-R
paradigm is practically indistinguishable from the engineering concept of a
black box (e.g. Arbib, 1964), whose functioning is known only from
observing the relations between inputs and outputs. The human mind in one
sense really is a black box, since we cannot observe what goes on in the
minds of other people, but only how they respond or react to what happens.
Yet in another sense the human mind is open to inspection, namely if we
consider our own minds where each human being has a unique and privileged
access (Morick, 1971).
That the S-O-R paradigm lives on in the view of the human as an
information processing system (IPS) is seen from the tenets of computational
psychology. According to this view, mental processes are considered as
rigorously specifiable procedures and mental states as defined by their causal
relations with sensory input, motor behaviour, and other mental states (e.g.,
Haugeland, 1985), in other words as a Finite State Automaton. This
corresponds to the strong view that the human is an IPS or a physical symbol
system, which in turn ‘has the necessary and sufficient means for general
intelligent action’ (Newell, 1980; Newell & Simon, 1972). The phrase
‘necessary and sufficient’ means that the strong view is considered adequate
to explain general intelligent action and also implies that it is the only
approach that has the necessary means to do so. In hindsight it is probably
fair to say that the strong view was too strong.
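To make the finite state automaton reading concrete, the sketch below (our own, entirely hypothetical example, not a model proposed by Newell and Simon) defines each ‘mental state’ solely by how it maps a stimulus onto a response and a successor state; the state, stimulus, and response names are invented for the illustration.

# Transition table of a Mealy-style finite state automaton:
# (current state, stimulus) -> (response, next state).
TRANSITIONS = {
    ("idle", "phone_rings"): ("answer_phone", "conversing"),
    ("idle", "hears_own_name"): ("orient_to_speaker", "attending"),
    ("attending", "question_asked"): ("reply", "conversing"),
    ("conversing", "call_ends"): ("hang_up", "idle"),
}

def respond(state: str, stimulus: str) -> tuple[str, str]:
    # Unknown (state, stimulus) pairs yield no response and leave the state unchanged.
    return TRANSITIONS.get((state, stimulus), ("no_response", state))

state = "idle"
for stimulus in ("phone_rings", "call_ends", "hears_own_name", "question_asked"):
    response, state = respond(state, stimulus)
    print(f"{stimulus:>16} -> {response:<17} (state is now {state})")

Nothing beyond this input-output mapping is ascribed to the ‘organism’: the automaton behaves like the black box described above, and it is precisely this reduction that the critics of the strong view, discussed next, object to.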
The strong view has on several occasions been met by arguments that a
human is more than an IPS and that there is a need, for instance, for
intentionality (Searle, 1980) or ‘thoughts and behaviour’ (Weizenbaum,