CHAPTER 17

Hazard management and safety culture

NICK PIDGEON, BARRY TURNER, BRIAN TOFT and DAVID BLOCKLEY

 

Introduction

During the course of the twentieth century the second industrial revolution in Western society has been characterised by the rapid growth and institutionalisation of large-scale technological systems. This growth has, in no small part, been due to the successful application of the physical sciences to a wide range of engineering problems. However, it is perhaps paradoxical that an apparently increased ability to control and manipulate the environment has raised a number of fundamental issues of safety and social acceptability. This is reflected in the fact that since the early 1970s the question ‘How Safe is Safe Enough?’, and in particular the need to define what is regarded as acceptable risk (see e.g. Fischhoff et al 1981), has become a central focus of concern to individuals and society. Concern in the early 1990s shows no sign of abating given recent catastrophic failures in high-technology systems, such as the Bhopal disaster, the loss of the Space Shuttle Challenger, Three Mile Island, Chernobyl, the Herald of Free Enterprise, and Piper Alpha. Such incidents have served to focus the attention of the public, the media, and regulators upon the risks associated with high-technology systems.

Much of the core agenda for the risk acceptability debate has been framed by the response of the engineering community to these issues. Clearly it would be impractical, if not in many cases unethical, to expect to be able to learn about low frequency/high consequence events merely upon the basis, ex post, of operating experience and observed accidents. As a result, the engineering response has focused upon the formal ex ante appraisal of the potential threats to the integrity of high-risk systems. This in turn has fostered the development of the discipline of probabilistic risk assessment. The results of major risk assessments provide increasingly dominant inputs to facility siting and risk acceptability decision-making, as well as to the difficult tasks of defining appropriate management and control procedures for risky activities.
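By way of illustration only (no such calculation appears in the sources cited here, and the component names and probabilities below are purely hypothetical assumptions), the following sketch shows the style of reasoning involved in a simple fault-tree analysis of the kind used in probabilistic risk assessment: assumed failure probabilities are combined through AND and OR gates to estimate the likelihood of an undesired top event.

```python
# Illustrative fault-tree calculation in the style of probabilistic risk
# assessment (PRA). All component names and probabilities are hypothetical.

def and_gate(*probabilities):
    """Probability that all of the (assumed independent) events occur together."""
    result = 1.0
    for p in probabilities:
        result *= p
    return result

def or_gate(*probabilities):
    """Probability that at least one of the (assumed independent) events occurs."""
    result = 1.0
    for p in probabilities:
        result *= (1.0 - p)
    return 1.0 - result

# Hypothetical annual failure probabilities for a cooling function.
pump_a_fails = 1e-2
pump_b_fails = 1e-2          # redundant backup pump
power_supply_fails = 1e-3    # shared supply needed by both pumps

# Cooling is lost if both pumps fail, or if the shared power supply fails.
loss_of_cooling = or_gate(and_gate(pump_a_fails, pump_b_fails),
                          power_supply_fails)

print(f"Estimated annual probability of loss of cooling: {loss_of_cooling:.2e}")
```

The independence and completeness assumptions built into this style of calculation are worth noting, since it is precisely these assumptions that the social and organisational sources of failure discussed below tend to undermine.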

An alternative response to the problems posed by high risk activities, and one which motivates the present chapter, focuses upon the critical importance of analysing the range of social factors that may potentially contribute to the onset of hazardous situations. This issue is relevant because human agency will be involved at all stages of the design, construction and use of any technological system. Rather than analyse the malfunctions of such systems purely in technical and environmental terms, a response stressed by traditional engineering education (Blockley 1980), a socio-technical framework may be necessary if we are to address adequately the ill-structured problems posed by high-risk activities. Furthermore, we would argue that a total approach to emergency planning involves two interrelated aspects. The first of these is the traditional role of providing for the amelioration of the effects of disasters. The second, however, involves the analysis of patterns of behaviour that are potentially identifiable prior to disasters, and through this the formulation of guidelines for prevention.

This chapter is concerned with the issue of disaster prevention. One question which is immediately raised is ‘what behaviour is appropriate to a social network in which people are co-operating in the planning, construction and operation of technological systems, where multiple uncertainties, and potentially high losses are present?’. Posing such a question raises a number of deep philosophical issues, particularly with respect to the normative role of social science research, which we do not wish to address here. The current discussion will be limited to the concept of safety culture, its origins in the aftermath of the Chernobyl accident, and its potential as a diagnostic aid to defining good and poor safety performance in organisational contexts.

Socio-technical hazards

It is becoming increasingly clear that it is restrictive to talk of failures in large-scale technological systems purely in technological terms. Individuals, their organisations and groups, and ultimately their cultures, are all implicated in the design, construction, operation, and monitoring of technological systems. Consequently, it is no surprise to find a number of investigators pointing to the critical role of human agency in the generation of disasters (Turner 1978, Blockley 1980, Perrow 1984, Kletz 1985). In popular accounts of disasters, particularly in press discussions immediately following such events, such agency is often described as ‘human error’. However, while this notion is important in western societies which have a longstanding habit of allocating blame on an individual basis, the attempt to find responsible individuals may well be at odds with the more subtle causes of failure. The latter, we have argued elsewhere (Pidgeon and Turner 1986, Pidgeon 1988, see also Reason 1987) are typically complex, multiple, and rooted in the social and organisational properties of the overall socio-technical system associated with any hazardous technology.

The notion of a socio-technical system stresses the close interdependence of both technological hardware and the individuals and their social arrangements involved with a technology. Both social and technical components interact with, and over time change each other in complex and often unforeseen ways. The value of viewing human-made hazards in these terms comes from the fact that we are forced to look beyond overly narrow technical explanations whenever an accident does occur, to consider as well the human and organisational preconditions to failure. Despite some early work (e.g. Turner 1978), it is only in the late 1980s that the spotlight of accident research has finally swung fully upon the socio-technical system. If we adopt a socio-technical perspective with respect to disasters then it is clear that such events invariably have in common the following general characteristics: specifically, that the causes are multiple over time, qualitatively diverse, and compounded in complex interactive ways (Pidgeon 1988).

Turner (1978) first discussed the multiple aspects to disaster causes. As a result of an analysis of a wide range of major accidents in the United Kingdom, he concluded that such events rarely came about for any single reason. Rather, it is typical to find that a number of undesirable events contribute to an ‘incubation period’, which is often to be measured in years. Turner's disaster model focuses in particular upon the informational difficulties associated with the attempts of individuals and organisations to deal with uncertain and ill-defined safety problems. These difficulties are often compounded with technical malfunctions and operational errors. Typically the result is an unnoticed situation that is counter to the accepted beliefs about hazards, and to established safety norms and procedures (see also Reason's (1989) discussion of latent failures). Eventually, this situation is revealed when the onset of disaster is precipitated by a trigger event, which might be a final critical error, or a slightly abnormal operating condition.

The qualitative diversity of disaster causes becomes apparent when we adopt a socio-technical perspective, encompassing several interrelated levels of analysis (see Pidgeon and Turner 1986). Some individual causes will be primarily technical in origin (technical level) while others will be predominantly behavioural. With respect to the latter it is useful to define a spectrum of problems of behaviour, ranging from simple individual slips or lapses (the individual level), through to those associated with managerial activities, either involving inter- and intra-group communication patterns (the small-group level), or those more deeply rooted in large-scale organisational information systems and dispositions (the predisposing institutional level).

The concept of interactive complexity is discussed by Turner (1978) and Perrow (1984). Their accounts suggest that disaster results from unanticipated and complex interactions between sets of contributory causes that would be unlikely, singly, to defeat established safety systems. Serious difficulties are associated with the detection of such interactions, in part as a consequence of the functional ambiguity and complexity of large socio-technical systems. While such events are always clear in hindsight, their prediction in foresight presents a much more difficult task.

All of the documented difficulties in formally dealing with risk (e.g. Blockley 1980, Fischhoff et al 1981, Vlek and Stallen 1980) are compounded further when we adopt a socio-technical perspective with respect to hazards. It follows from a consideration of the multiple and complexly interactive nature of accident causes that the detailed pathways to disaster in any specific instance may be uniquely unpredictable, an observation that has led Wagenaar and Groeneweg (1987) to characterise such events as ‘impossible accidents’. This factor is further compounded by the problems of dealing predictively with the qualitative diversity of causes, and in particular with those of human and organisational origin. Human reliability analysis may have had modest success in attaching probability estimates to individual errors such as operator slips or lapses. However, how one attempts the probabilistic assessment of cognitive failings such as operator misdiagnosis, small-group events such as communication breakdown, or the more fundamental predisposing factors such as the bounds to a procedure laid down in a code of practice is an open question (Pidgeon et al 1986, 1987). If we combine the technical engineering uncertainties with such social uncertainties we can see that the task of modelling the risk associated with a socio-technical system will often be a daunting one.

Fundamentally, the multiple uncertainties associated with large-scale hazards ensure that risk decisions are decisions being made under ignorance (Collingridge 1980). As Collingridge suggests, we should always be prepared for decisions made under ignorance to be in error, in the sense that ‘the decision-maker may discover new information which, given his objectives, shows that some other option would have served him better than the one he has chosen’ (Collingridge 1980, p 29). There is, of course, an ultimate paradox presented here, in the sense that one can never know completely what one does not know! It is therefore an important policy implication that, upon completion of any formal risk assessment, all decisions based upon it should still be regarded as open to error. This recognition in turn focuses our attention back to the processes that are generating hazardous situations within socio-technical systems, and the need for adequate hazard management. Living with the fact that formal risk assessment only provides a partial view (Blockley 1990), and that decisions about risk will almost certainly be in error, leads us to the realisation that risk prediction should always be complemented by strategies for the ongoing control of safety.

The Chernobyl accident and safety culture

The term safety culture entered the technical literature in the aftermath of Chernobyl. The Chernobyl disaster in April 1986 was typical of many such socio-technical failures in that the final catastrophic event could be traced to a combination of events: principally, design deficiencies and a number of procedural errors committed by the team operating the reactor. The reactor type that failed at Chernobyl is known by its Russian acronym RBMK, a pressure tube design. This type of reactor suffered from an inherent generic design flaw, such that under certain conditions of low power operation the energy output of the core could become unstable, potentially leading to a ‘fail-unsafe’ condition. This difficulty had been anticipated by the reactor designers, and consequently the operating procedures for the plant specified that the reactor should not ordinarily be run at the critical low power levels. However, in order to conduct an experiment (ironically, with the long-term goal of improving the reactor safety systems) the operators not only allowed the reactor power level to fall into the critical region, but also simultaneously disengaged a number of the automatic safety systems.

The public response of the Western nuclear industry in attempting to come to terms with the accident, and in particular in justifying why a Chernobyl-like accident would be unlikely to happen in the West, focused upon two distinct issues. The first of these was the relatively narrow technical question of the reactor design characteristics. So, for example, a United Kingdom Atomic Energy Authority report characterised the event as a truly impossible accident by concluding that:

‘The Chernobyl accident was so unique to the Soviet RBMK reactor design that there are very few lessons for the United Kingdom to learn from it. Its main effect has been to reinforce and reiterate the importance and validity of UK safety standards. A large scale reactor accident of the type that occurred at Chernobyl could not happen in the United Kingdom.’ (U.K. Atomic Energy Authority 1987, 5.44.)

Whether we accept such a claim is a matter of interpretation. Clearly, at a very detailed level of analysis it can be substantiated, since there are no reactors in the West of precisely this type with precisely the same operating and design characteristics. However, to adopt such a narrow interpretation is to miss opportunities for generalised learning. Given that at a general level of analysis many disasters display similar characteristics (Turner 1978) it is clear that every incident has the potential to provide lessons (Toft 1987). In addition to this, however, the adoption of a strictly technical interpretation of the Chernobyl accident leads to a more fundamental oversight; specifically of the human errors committed by the operators involved. If we accept the assumption that many patterns of behaviour do not respect national boundaries then, as Reason (1987) correctly points out, the human and organisational pathologies that lay behind the Chernobyl disaster are likely to be alive and well in Western nuclear organisations. The notion of ‘safety culture’ arose initially as a rationale for the public claim that similar procedural errors and omissions could not in fact be repeated in the West. Commenting upon the implications of the Chernobyl accident for the recently concluded Layfield report (1987) arising from the Sizewell B inquiry, the then UK Secretary of State for Energy, Mr Peter Walker, stated in Parliament that:

‘In relation to the Chernobyl accident, the chief inspector of nuclear installations has advised me that the PWR (Pressurised Water Reactor) design for Sizewell B is of a different reactor type from the Soviet RBMK design. All nuclear power stations in the United Kingdom, unlike those in the Soviet Union, must have engineered control and automatic protection systems. Moreover, our system of regulation, unlike that which applied in the Soviet Union, ensures that there is a proper and reliable procedural framework of controls. All our experience in the United Kingdom has demonstrated that there is a superior safety culture to that which apparently existed at Chernobyl, and which allowed the repeated deliberate noncompliance with safety procedures. The chief inspector of nuclear installations therefore advises me that the Chernobyl accident does not call for any reconsideration of the recommendations of the Layfield report. I agree with his advice.’ (Atom 1987, p 36.)

This comment, drawing upon a European analysis of the implications of Chernobyl (see also Organisation for Economic Cooperation and Development Nuclear Energy Agency 1987), focuses attention upon the social and cultural elements which contribute to safety, or if these are deficient, to disaster.

A number of significant questions are raised by this. For instance, what is a safety culture; how could such a culture be located or identified; and what features would it have? Furthermore, since a normative element is implicit in the original statement, we might also wish to ask what a ‘good’ safety culture would be. In particular, what characteristics might it have; are they amenable to investigation and specification; and do they offer an aid to individuals involved in planning to reduce the hazards and the costs which arise from deficient safety provisions? The resolutions to some of these issues might then permit us to pose the comparative questions of particular interest to the Western nuclear, and other high-technology, industries. In the remainder of the chapter we turn to the difficult task of making a tentative characterisation of safety culture.

New directions: characterising safety culture

If we accept the model of disaster causation that stresses the ‘incubation’ of disaster preconditions, we might expect that significant, general social and organisational factors could be identified at an early stage of the disaster development sequence. In this respect the notion of safety culture is an appealing concept, since it provides us with a global characterisation for some of the more elusive background preconditions to disaster. However, it is also clear that such preconditions are unlikely to be easy to identify in foresight. As Olson (1987) rightly comments, the further back in the accident causation chain one attempts to look, such as at production or economic pressures upon safety, or at corporate attitudes towards acceptable risks, the more uncertain one is that outcome-relevant performance is being measured.

In a very general sense the concept of ‘culture’ is widely used in social science and a multiplicity of definitions are available. For present purposes we wish to regard a culture as the collection of beliefs, norms, attitudes, roles and practices of a given group, organisation, institution or society. The investigation of culture crucially involves the exploration of meaning and of systems of meaning, since in its most general sense, culture refers to the array of systems of meaning through which a given people understand the world. Such a system specifies what is important to them, and explains their relationship to matters of life and death, family and friends, work and danger. It is also possible to think of the culture of small groups of workers, of departments, of divisions and organisations as being nested successively within one another, and then to consider the organisational culture as being located within a national or international framework (Turner 1971). A culture is also a set of assumptions and practices which permit beliefs about topics such as danger and safety to be constructed. A culture is created and recreated as members of it repeatedly behave in ways which seem to them to be the ‘natural’, obvious and unquestionable ways of acting, and as such will serve to construct a particular version of risk, danger and safety. Such versions of the perils of the world will also embody culturally distinctive explanatory schemes (Wilkins 1989), which will provide appropriate accounts of accidents, and how and why they happen.

We might broadly define safety culture as those sets of norms, roles, beliefs, attitudes and social and technical practices within an organisation which are concerned with minimising the exposure of individuals to conditions considered to be dangerous. In a socio-technical system of any size, the understanding of potential hazards will inevitably be imperfect. This in turn points to the importance of the limits to, and social construction of, our knowledge. Within organisational settings there will be a range of uncertainties associated with the production and reproduction of both legitimate knowledge and, correspondingly, domains of ignorance (see Smithson 1985, 1989). The literature on safety is full of very clearly defined and well-understood hazards which have, for various reasons, become neglected. The lorry driver who attempts to drive a high vehicle under a clearly marked low bridge would be an example of the neglect of an evident hazard. A similar example would be the failure to close the bow doors on a passenger ferry before it puts to sea. Other hazards, while knowable in principle, may not be appreciated at a given instant by the personnel who need to know within a large organisation (Turner 1978). And some events, even given the best resources and scientific expertise available, will remain ill-defined; for example, is global warming a reality (Douglas 1988)? Certain hazards may be said to be ‘culturally unavailable’ in the sense that they are extremely difficult to recognise or define within a given cultural context. An example of the latter would be a failure resulting from a mode of behaviour unrecognised within the existing engineering paradigms, as happened in the case of the Tacoma Narrows bridge collapse (see Blockley 1980).

Recognition of the importance of the limits to knowledge, and in particular the existence of uncertainty and incompleteness, sets a framework for specifying the characteristics of a good safety culture, and the mapping out of a research agenda for future studies of organisations and hazards. The products of such research might help us to characterise both safe and unsafe organisations, although it is important to add here the caveat that intervention in the social world is likely to generate unanticipated consequences. Measures designed with the goal of promoting safety (for example, the experiments at Chernobyl) might under some circumstances serve to undermine it. A corollary to this is that the specification of safe practice must necessarily have a provisional quality. Tried and tested practice may always have to change in response to the discovery of previously unforeseen accident pre-conditions.

Four very general characteristics can be tentatively advanced as potential facets of a corporate safety culture. These are the provision to practitioners of feedback from incidents; the existence of an appropriate safety organisation; the existence of comprehensive and institutionalised norms and rules for handling safety problems; and the generation of an appropriate set of beliefs and assumptions constituting a corporate attitude towards safety.

The first facet, that of feedback, typically operates at an industry-wide rather than an organisation-specific level, and might be categorised under the more general headings of informational management and reflection upon current practice. Its critical role arises because it is common to find that the general patterns displayed by particular disasters have already been noted in similar incidents. In particular, near-misses often differ from actual disasters only by the intervention of chance factors. Such events can, with the benefit of hindsight, be interpreted as ‘warning signals’, and should always be taken seriously. In some contexts, such as the aviation industry (Hall and Hecht 1979), a high premium is placed upon the analysis and dissemination of incident data obtained on a ‘no-fault’ reporting basis. A contrasting case is provided by our own studies of safety in structural engineering (Turner et al 1989). Here it became clear that the construction industry, which suffers in the United Kingdom from a very poor safety record, has little formal provision for the collection and dissemination of incident data. When dealing with risk and safety, the value of formal arrangements to provide incident feedback such that practitioners can reflect fully upon ongoing operations cannot be overemphasised.
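A minimal sketch of the kind of record that a ‘no-fault’ reporting scheme might hold is given below. The field names and the keyword search are illustrative assumptions of our own, not a description of the aviation scheme or of any other actual reporting system.

```python
# Hypothetical sketch of a 'no-fault' near-miss record and a simple
# dissemination step. Field names are illustrative, not taken from any
# actual reporting scheme.

from dataclasses import dataclass, field
from typing import List

@dataclass
class IncidentReport:
    reference: str                 # anonymous reference; no reporter identity held
    date: str                      # when the incident or near-miss occurred
    activity: str                  # what was being done at the time
    narrative: str                 # free-text account in the reporter's own words
    contributing_factors: List[str] = field(default_factory=list)
    lessons: List[str] = field(default_factory=list)

def disseminate(reports: List[IncidentReport], keyword: str) -> List[IncidentReport]:
    """Select reports whose narrative mentions a keyword, for circulation to
    practitioners engaged in similar operations."""
    return [r for r in reports if keyword.lower() in r.narrative.lower()]
```

The deliberate absence of any field identifying the reporter reflects the ‘no-fault’ principle: the purpose of collecting such reports is learning and dissemination, not the allocation of blame.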

The notion of safety organisation relates to the presence or absence within a particular corporate setting of formal structures and individuals charged with dealing with safety; for example, safety sections and committees, safety officers and risk analysts. The principal motivation for such provision is to ensure that the appraisal of hazards is conducted, as far as is possible, independently of the risk-generating processes. The suggestion here extends beyond that of merely providing the infrastructure for ‘health and safety’ at work. Rather, safety management is a task that requires a process of constant ongoing monitoring, as a means of evaluating potential and significant uncertainties and incompleteness in knowledge, as well as unanticipated changes to the system and its operating environment. This would involve the functions of uncertainty analysis, the evaluation of the potential relevance of incident data generated by internal and external feedback systems, as well as the appraisal of human factors and management safety performance. An example of inadequate safety organisation is provided by the National Aeronautics and Space Administration (NASA) and the destruction of the Space Shuttle Challenger. The U.S. House of Representatives Report (1986) on the accident documents the demise of NASA's organisational safety structure in the period following the successful Apollo programme of the 1960s and 1970s, in terms of diminished personnel, remit, resources, and power within the organisation. It is significant here that the development and operation of the space shuttle system was conducted in the absence of any formal mechanism for uncertainty analysis of potential flight risks.

The norms and rules governing safety within an organisation, whether explicit or tacit, are at the heart of safety culture. As corporate guidelines for action, these will shape the perceptions and actions of the individuals in the organisation in particular ways. In an ideal world one might attempt to specify a set of complete, up-to-date, and practical contingencies for tackling all foreseeable hazards. However, in specifying rules defining legitimate areas of corporate responsibility there is a tension between the need to cope both with known hazards and those that are beyond the boundaries of current knowledge. The inflexible application of existing rules to guard against the former might lead to the oversight of the latter (a form of cognitive mind-set, or ‘groupthink’; Janis 1972). Being alert to pre-defined hazards requires appreciation of the individual and organisational difficulties that tend to conceal and distort significant available information. Being alert to initially unseen hazards sets a more demanding task, but probably involves a willingness to monitor ongoing technologies in diverse ways; to research, and accept uncertainty and incompleteness of knowledge as facts of life; to be prepared to solicit opinions from outsiders; and to exercise ‘safety imagination’. With respect to the latter, the demanding and creative task of assessing the available ‘intelligence’ about hazards (Pidgeon 1988) might be facilitated, as Spooner (see Chapter 8) suggests, by independent safety ‘think tanks’.

The final facet, that of safety attitudes, refers to individual and group beliefs about hazards and the importance of safety, together with their motivation to act upon those beliefs. The best information system, safety organisation, and rules and norms for dealing with hazards, will be rendered impotent if individuals are not sufficiently motivated to act when a safety problem is first identified. If safety norms are seen as a corporate responsibility, then so should be the task of motivating individuals to take action. Rather than viewing risk control as something that the safety organisation is capable of dealing with alone, it should be seen as an objective that involves everyone, with a bearing upon the long-term health of the enterprise (Coletta 1988). A minimal requirement here would be for senior management to hold a realistic view of the short- and long-term hazards entailed by the organisation's activities. More generally there is a requirement for the rules and norms relating to hazards to be supported and endorsed throughout the organisation. In this sense concern with safety needs to be ‘representative’ of organisation members, not imposed in a punitive manner by one group on another (Gouldner 1954). Only in this way is it possible to move towards a state in which the recognition of the necessity and desirability of the rules provides a motivation to conform to them in spirit as well as according to the letter. Under such circumstances everybody in the organisation would regard the policing of hazards as a personal as well as a collective goal.

Concluding comments

We have argued that in an emergency planning context disaster prevention should be accorded a similar priority to the more traditional task of the amelioration of disaster consequences. This view stems from a model of disasters which characterises them not as chance events, but as socio-technical phenomena that display similar patterns of preconditions. A socio-technical perspective focuses our attention upon the human and organisational causes of disasters, and the need to specify appropriate social arrangements for dealing with hazards. In this respect it seems likely that in the near future more attention will need to be paid to the design of safe organisations (Turner 1989). Whether such a goal is a realistic aim remains an open question. However, as one step in this direction the concept of safety culture has been introduced, and a number of potential facets of this outlined. The notion of safety culture is as yet underspecified, but would appear to capture the current concern with the role of social and organisational factors in the generation of disasters. Meeting the research agenda implicit in the concept of safety culture would be a suitable task in itself for the next decade of risk and safety research.

REFERENCES

Atom (1987). Ministerial statement on the Sizewell B nuclear power station, Atom, May, 367: 36.

Blockley, D.I. (1980). The Nature of Structural Design and Safety. Ellis-Horwood, Chichester.

Blockley, D.I. (1990). Open-world problems in structural reliability. In: A. Ang, M. Shinozuka and G. Schueller (eds), Structural Safety and Reliability, Vol 3: Proceedings of ICOSSAR 5. American Society of Civil Engineers, New York.

Coletta, G.C. (1988). The importance of risk control for long-term stability and growth, Advanced Management Report, 7(11): 6–8.

Collingridge, D. (1980). The Social Control of Technology. Open University Press, Milton Keynes.

Douglas, M. (1988). A Typology of culture. Paper presented to Gemeinsamen Kongress Kultur und Gesellschaft, Zurich, 4–7 October 1988.

Fischhoff, B., Lichtenstein, S., Slovic, P., Derby, S.L. and Keeney, R.L. (1981). Acceptable Risk. Cambridge University Press, Cambridge.

Gouldner, A.W. (1954). Patterns of Industrial Bureaucracy. Free Press, Glencoe, Ill.

Hall, D.W. and Hecht, A.W. (1979). Summary of the characteristics of the air safety system reporting database, Ninth Quarterly Report NASA, TM 78608, 23–34.

Janis, I.L. (1972). Victims of Groupthink. Houghton Mifflin, Boston, MA.

Kletz, T.A. (1985). An Engineer's View of Human Error. IChemE, Rugby.

Layfield, F. (1987). Sizewell B Public Inquiry. HMSO, London.

Olson, J. (1987). Measuring safety performance of potentially dangerous technologies. Industrial Crisis Quarterly, 1: 44–53.

Organisation for Economic Cooperation and Development Nuclear Energy Agency (1987). Chernobyl and the Safety of Nuclear Reactors in OECD Countries. Organisation for Economic Cooperation and Development, Paris.

Perrow, C. (1984). Normal Accidents, Basic Books, New York.

Pidgeon, N.F. (1988). Risk assessment and accident analysis. Acta Psychologica, 68: 355–368.

Pidgeon, N.F., Blockley, D.I. and Turner, B.A. (1986). Design practice and snow loading: lessons from a roof collapse, The Structural Engineer, 64A: 67–71.

Pidgeon, N.F., Blockley, D.I. and Turner, B.A. (1987). Discussion of ‘Design practice and snow loading’, The Structural Engineer, 65A: 236–240.

Pidgeon, N.F. and Turner, B.A. (1986). ‘Human error’ and socio-technical system failure. In: A.S. Nowak (ed.), Modelling Human Error in Structural Design and Construction, 193–203. American Society of Civil Engineers, New York.

Reason, J.T. (1987). The Chernobyl errors, Bulletin of the British Psychological Society, 40: 201–206.

Reason, J.T. (1989). The contribution of latent human failures to the breakdown of complex systems, Philosophical Transactions of the Royal Society of London, B, 327: 475–484.

Smithson, M. (1985). Toward a social theory of ignorance, Journal for the Theory of Social Behaviour, 15: 151–172.

Smithson, M. (1989). Ignorance and Uncertainty. Springer-Verlag, Berlin.

Toft, B. (1987). Learning the lessons of disasters. Paper presented at the Practical Achievements of Engineering Reliability Conference, Leeds, April 1987.

Turner, B.A. (1971). Exploring the Industrial Subculture. Macmillan, London.

Turner, B.A. (1978). Man-Made Disasters. Wykeham, London.

Turner, B.A. (1989). How can we design a safe organisation? Paper presented at the Second International Conference on Industrial Organisation and Crisis Management, New York University, 3–4 November 1989.

Turner, B.A., Blockley, D.I. and Pidgeon, N.F. (1989). Engineering failures: development of a safety knowledge-base for construction projects. Final Report to Joint Committee of SERC/ESRC. February 1989.

U.K. Atomic Energy Authority (1987). The Chernobyl Accident and its Consequences. (UKAEA Report NOR 4200). HMSO, London.

U.S. House of Representatives (1986). Investigation of the Challenger Accident. (House Report 99–1016). U.S. Government Printing Office, Washington DC.

Vlek, C. and Stallen, P.J. (1980). Rational and personal aspects of risk, Acta Psychologica, 45: 273–300.

Wagenaar, W.A. and Groeneweg, J. (1987). Accidents at sea: multiple causes and impossible consequences, International Journal of Man-Machine Studies, 27: 578–598.

Wilkins, L. (1989). The risks outside and the pictures in our heads: Connecting the news to people and politics. Paper presented at ANU Conference on Risk Perception and Response in Australia, Australian Counter Disaster College, Mount Macedon, Victoria, 5–7 July 1989.

SECTION SUMMARY IV:

Promising avenues

 

Aspects of the chapters in this section reinforce many of the issues raised earlier in the volume. Murray and Wiseman's observations on the degree of preparedness of many, perhaps all, major hospitals in the UK for dealing with the victims of toxic hazards are disturbing. Equally disturbing, in view of the Towyn flood in North Wales in February 1990, is Green's prediction in Chapter 12, written in September 1989, of a collapse of a sea wall followed by major urban flooding. But more importantly, contributions to this section indicate some directions which promise, or at least suggest, the way towards improving hazard management and emergency planning. These directions take the form of techniques or mechanisms and principles. The principles concern emergency planning as well as much broader society-wide approaches to improving safety.

Techniques and mechanisms

The most obvious technique is the use of computer technology to enhance information storage, exchange and accessibility. The National Poisons Unit, together with other related national and international projects, described by Murray and Wiseman in Chapter 15, offers one model for this end. Most importantly it offers a positive model for maximum information exchange. This is an important principle of emergency planning and management and one which is often ignored. Links are being pursued in a number of areas: between national agencies and professional groups; the creation of new links between national organisations; and the possibility of a centre to act as a clearing house for information, and as a provider of training resources and follow-up capability regarding exposed populations. The Poisons Unit's development plans include the definition of roles and responsibilities, and a built-in evaluation of incident handling. The last point illustrates another important principle of hazard management and emergency planning: that of an organisation which is designed to learn rapidly. This point is reinforced in Chapter 17, in which Pidgeon et al view ‘feedback’ on incidents, and learning therefrom, as most important.

Another promising and arguably under-used technique for hazard management and emergency planning is scenario construction. Whilst scenarios are already widely used by emergency planners to rehearse emergency response and in training, in Chapter 14 Penning-Rowsell and Winchester suggest that this technique can also be helpful in raising the risk awareness of the public and agency managers. By suggesting this approach Penning-Rowsell and Winchester identify a poorly understood concept: risk communication. In so doing they also introduce an important principle of hazard management: the need for explicit design and use of risk communication systems. Risk communication is far from problem-free, as Penning-Rowsell and Winchester state, but progress can be made by employing their ‘golden rules’.

Principles

Some approaches to emergency management

Chapters discussing emergency planning and management do so at the conceptual rather than the operational level. Interestingly from a United States perspective, Kreps observes that the push for emergency planning tends to come from the top down, but contributions to Section II suggest that currently the opposite tends to be the case in Britain. Drawing upon a detailed examination of hundreds of case studies Kreps stresses two principles for effective emergency management: improvisation and preparedness (Chapter 11). He stresses that communities are already good at adapting and improvising, and emergency planners should aim to capitalise upon this characteristic. Green makes the same point by stressing that hazard and emergency management is likely to be most effective when hazard managers and emergency planners seek to enable the public to adapt as effectively as possible to threats or events.

At a more general level Handmer (Chapter 16) discusses the conceptual framework for hazard and emergency management adopted in Australia. This includes the need for an ‘all hazards’ approach, the comprehensive approach (prevention, preparedness, response and recovery), an integrated ‘all agencies’ approach, and prepared communities. The Australian approach certainly has its problems, but the development of a conceptual framework has helped identify priorities and organisational weaknesses. Priorities would include hazard analysis, and weaknesses would include hazard prevention.

The integration of hazard and emergency management

A strong theme in the chapters in this section is the need to integrate the management of hazards with that of emergencies. In Chapter 12 Green points out that emergency management is needed when hazard management fails. Hazards may be viewed as providing the causative mechanism for disaster, while disasters themselves are simply the consequence of hazards. Of course, the vulnerability of the community in question may well be an important factor in causation: an important point stressed by Lewis (Chapter 13). By focusing on the disaster as an event we are unlikely to reduce the chances of the next one occurring. Instead, Lewis suggests, we should analyse the vulnerability of communities and devise vulnerability reduction measures.

Other chapters have commented upon the highly reactive response to hazard and disaster, with a piecemeal approach to safety legislation, weak enforcement of existing regulations, a far too cosy relationship with industry, and barriers to effective learning. Murray and Wiseman's proposals offer an opportunity to identify chemical hazards and to develop effective mitigation measures. The Australian counter-disaster conceptual framework attempts to formalise the linkage by giving weight to the four elements of prevention, preparedness, response, and recovery. Recently, some Australian states have restructured their counter-disaster organisational arrangements to give more emphasis to hazard identification and reduction.

Involving the local community

Where relatively minor events are concerned, or where the community is very large, response will be predominantly by professional full-time emergency service workers; although initial response is often by unhurt victims and members of the public at the scene. Conversely, for exceptionally large events, or where the community is very small and isolated, response will be predominantly by members of the public. For example, after major earthquakes local people do the vast majority of rescue before outside specialist teams arrive – although it is these teams which attract media attention. Here the time factor is important. Local people are available immediately, whether communication and transport routes are open or not. It is something of a paradox that the more emergency services become professionalised and salaried, the less the community is involved in counter-disaster activities. Unfortunately, this is likely to reduce the capacity of the public at risk to deal with disasters themselves. A very positive element of Australian counter-disaster arrangements is seen to be the integration of the community through volunteers. This is particularly the case with rural fire brigades.

The discussion in this sub-section has concentrated upon response, but the affected public is also critical for effective implementation of the other phases of disaster management. Preparedness requires a psychologically prepared population, and recovery must occur at the individual, household and company levels as well as at the institutional level.

The development of a safety culture

Discussion of the importance of individuals leads us straight into consideration of the overall cultural context driving the actions of individuals and organisations. In an earlier chapter, Spooner argues the case for increased corporate responsibility in an age of deregulation (see Chapter 8). In this section Pidgeon et al advocate the development of a ‘safety culture’, whereby safety would receive a relatively high priority at every level of an organisation in both the formal and informal arenas. To achieve this may require some significant changes to aspects of British society and government. Some of these aspects were discussed by Hood and Jackson in Chapter 9. These might include the need to free up information flow and to reduce the ‘culture of secrecy’ permeating government. This last issue is discussed further in the final chapter.
