Chapter 2

New Definitions of Old Issues and Need for Continuous Improvement

N. Paltrinieri1,2 and F. Khan3
1Norwegian University of Science and Technology (NTNU), Trondheim, Norway
2SINTEF Technology and Society, Trondheim, Norway
3Memorial University of Newfoundland, St John's, NL, Canada

Abstract

A number of definitions exist to describe major accidents and their specific features. In particular, several experts have committed to providing specific and effective definitions of low-probability, high-impact events, whose classification is particularly challenging owing to their rarity. These events may result from failures in preassessment, knowledge management, or likelihood evaluation, or they may be simply unpredictable. This chapter reports a brief overview of definitions of such extreme events, from atypical accidents to dragon kings, through the popular metaphor of the black swan. To a certain extent, these different perspectives agree on the fact that conjunctions of “small things” have the potential to result in extreme effects. For this reason, this chapter suggests a twofold approach for limiting such events: well-known small failures should not be disregarded, and models and classifications should be continuously improved to keep track of the ever-changing industrial environment.

Keywords

Atypical accident scenarios; Black swans; Dragon kings; Perfect storms; Unknown unknowns

1. Introduction

Major industrial accidents can be grouped under several definitions on the basis of their specific features. Particular attention has been recently focused on low-probability, high-impact events, which are challenging not only to prevent but also to grasp. In fact, they may be the result of incomplete hazard identification, poor knowledge management, or ineffective likelihood evaluation, or they may be simply unpredictable. This chapter reports a brief overview of definitions of such extreme events, highlighting and discussing similarities, differences, and limitations.

2. Atypical Accident Scenarios

Paltrinieri et al. [1,2] define an accident scenario as atypical when it is “not captured by hazard identification methodologies because deviating from normal expectations of unwanted events or worst case reference scenarios.” An atypical accident may occur when hazard identification does not produce a complete overview of system hazards. This step represents a qualitative preassessment and is the foundation of the whole risk management process. Paltrinieri et al. [1,3] attribute the potential deficiencies of hazard identification results to a lack of specific knowledge and low awareness of the related risks among analysts and, more generally, safety professionals in an organization. In particular, risk awareness has a primary role, as illustrated in Fig. 2.1.
Fig. 2.1 shows the development of two different cases, both starting from a condition of unawareness of accident risk and lack of related information (point 1). In an ideal case, a reasonable doubt would grow in the minds of the safety professionals of an organization once they come across early warnings, such as unwanted events occurring at their site, historical data, and literature studies from external sources. Thus, their risk awareness would progressively increase with the availability of related information (point 2), which is used to increase knowledge of the potential scenario up to a condition of relative confidence about the accident risk (point 3), where only loss of organizational memory may reduce awareness. According to the definitions in Table 2.1, the ideal case depicts proactive management of accident risk, where the accident is initially an “unknown unknown” (point 1) that first evolves into a “known unknown” (point 2) and is finally and successfully classified as a “known known” (point 3).
Figure 2.1 Management of accident risk on the basis of awareness and availability of related information in an ideal case and an atypical accident case. Adapted from Paltrinieri N, Dechy N, Salzano E, Wardman M, Cozzani V. Lessons learned from Toulouse and Buncefield disasters: from risk analysis failures to the identification of atypical scenarios through a better knowledge management. Risk Analysis 2012;32:1404–19.

Table 2.1

Definitions of Known/Unknown Events (Used by US Army Intelligence and Made Popular by Donald Rumsfeld [4])

• Unknown knowns: events we are not aware that we (can) know by means of available (but disregarded) information.
• Known knowns: events we are aware that we know, for which risk can be managed with a certain level of confidence.
• Unknown unknowns: events we are not aware that we do not know, for which risk cannot be managed.
• Known unknowns: events we are aware that we do not know, for which we employ both prevention and learning capabilities.
In the “atypical accident” case, available information on accident risk is disregarded. The situation develops in a condition of unawareness despite the succession of related early warnings (point 4), and the only development is the passage of the risk from “unknown unknown” to “unknown known,” which could potentially be understood through the available information (Table 2.1). It is only when the atypical accident occurs that risk awareness suddenly increases and compensation measures are taken.
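As a minimal illustration of this taxonomy (not a method proposed by the authors), the following Python sketch maps two simplified dimensions, awareness of a risk and availability of knowledge about it, onto the four categories of Table 2.1; the function name and the mapping onto points 1–4 of Fig. 2.1 are assumptions made here for clarity.

# Minimal illustrative sketch (hypothetical names): classify a potential accident
# scenario into the Table 2.1 categories from two simplified boolean dimensions.

def classify_risk(aware: bool, knowledge_available: bool) -> str:
    """Return the Table 2.1 category for a potential accident scenario."""
    if aware and knowledge_available:
        return "known known"      # eg, Fig. 2.1 point 3: risk managed with confidence
    if aware and not knowledge_available:
        return "known unknown"    # eg, point 2: reasonable doubt, learning in progress
    if not aware and knowledge_available:
        return "unknown known"    # eg, point 4: early warnings available but disregarded
    return "unknown unknown"      # eg, point 1: risk cannot be managed

# Ideal case of Fig. 2.1 (points 1 -> 2 -> 3) versus the atypical accident case (point 4)
print(classify_risk(False, False), classify_risk(True, False), classify_risk(True, True))
print(classify_risk(False, True))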
The literature shows several examples of atypical accidents that have occurred or have the potential to occur (Table 2.2). The former are cases of hazard identification failure: the respective safety reports identified only less critical accident scenarios, despite the occurrence of similar events in the past. The latter are related to new and emerging technologies: the related hazards may be well known to specialists but still a gray area for safety professionals, who nonetheless have the opportunity to learn from early warnings.

Table 2.2

Examples of Atypical Accidents

Past Accidents

Atypical event: ammonium nitrate explosion, Toulouse, France, 2001
• Explanation: worst case scenario in the safety report was an ammonium nitrate storage fire.
• Early warnings, previous similar accidents: Oppau, Germany, 1921; Texas City, US, 1947; Brest, France, 1947; Red Sea, cargo ship Tirrenia, 1954.
• References of atypical event study: [1,5,6].

Atypical event: vapor cloud explosion, Buncefield, UK, 2005
• Explanation: worst case scenario in the safety report was a large pool fire.
• Early warnings, previous similar accidents: Houston, US, 1962; Baytown, US, 1977; Newark, US, 1983; Naples, Italy, 1985; St. Herblain, France, 1991; Jacksonville, US, 1993; Laem Chabang, Thailand, 1999.
• References of atypical event study: [1,5,6].

Potential Accidents

Atypical event: potential accident with carbon capture and sequestration
• Explanation: increased scale of substance handling and relative lack of experience in the identification of related hazards.
• Early warnings, examples of past related accidents: natural CO2 releases, lakes Monoun and Nyos, Cameroon, 1984 and 1986; cold boiling liquid expanding vapor explosion, Worms, Germany, 1988.
• References of atypical event study: [7–10].

Atypical event: potential accident with liquefied natural gas regasification
• Explanation: increased scale of substance handling and relative lack of experience in the identification of related hazards.
• Early warnings, examples of past related accidents: rapid phase transition, Canvey Island, UK, 1973; asphyxiation, Oklahoma City, US, 1978; rapid phase transition, Bontang, Borneo, 1993; boiling liquid expanding vapor explosion, Tivissa, Spain, 2002.
• References of atypical event study: [11,12].


3. Black Swans

Until the 17th century, all swans known to Europeans were white. However, with the discovery of Australia, the first black swans were sighted and became the symbol of a disproved belief (ie, that all swans are white). For this reason, a black swan event is a rare event that has never been encountered before, and it can be summarized by the following three principles [13]:
• Rarity: a black swan is an outlier, because nothing could convincingly point to its possibility.
• Impact: a black swan has extreme consequences.
• Predictability: a black swan can be explained only after the fact and cannot be anticipated.
These principles may lead to the degradation of predictability and to the need for robustness against negative black swans and for learning from positive ones. In fact, they are not only adverse events but also rare unplanned opportunities from which we should learn. Taleb [13] affirms that these kinds of events are the result of epistemic limitations (or distortions), demonstrating a clear overlap with the definition of atypical events [1]. In fact, Aven [14] and Aven and Krohn [15] refer to a black swan as a surprisingly extreme event relative to one's belief/knowledge, defining and giving examples of three types of black swans (partially according to the concept of unknowns/knowns previously introduced; see Table 2.1): unknown unknowns, unknown knowns, and events judged negligible.
An example of a black swan of the unknown unknown type is represented by the swine flu spread in 2009 caused by the H1N1 virus, for which a vaccine was quickly developed. In some countries the authorities aimed at vaccinating the entire population. The influenza turned out to be relatively mild; however, the vaccination had severe side effects, which were previously unknown [16]. A black swan of the unknown known type might have been the disaster involving the Deepwater Horizon drilling rig, where a worker did not alert others on the rig as pressure increased in the drilling pipe, a sign of possible gas/fluid entry into the wellbore (kick), which can lead to blowout [17]. The Fukushima nuclear disaster, because of its low probability of occurrence, was judged as a negligible event. This third type of black swan was preceded in past centuries by extreme natural events (tsunamis of heights beyond the design criterion of the nuclear plant), which were not accounted for during the design of the nuclear reactors [18].
Black swans are addressed by Paté-Cornell [18] in comparison with another definition of rare event: the perfect storm. This kind of event involves mostly aleatory uncertainties (randomness) in conjunction with rare but known events. It takes its name from the devastating storm in the northern Atlantic described by Junger [17], which caught some boats by surprise and killed 12 people in 1991. It was the result of the conjunction of a storm from the North American mainland, a cold front from the north, and a tropical storm from the south. Paté-Cornell affirms that black swans represent the ultimate epistemic uncertainty or lack of fundamental knowledge, where not only the distribution of a parameter but also, in the extreme, the very existence of the phenomenon itself is unknown. However, in reality, most scenarios involve both aleatory and epistemic uncertainties, and a clear distinction cannot be outlined.

4. Dragon Kings

Like black swans, the events defined by Sornette [19] as dragon kings are outliers. The term “king” emphasizes the importance of such extreme events, which may be identified as the outliers of a power law distribution. This is an analogy to the wealth of kings, which, if plotted against their subjects' wealth, lies beyond the power law distribution. These exceptional events are also defined as “dragons” to stress that they are beyond the normal, with extraordinary characteristics; their presence, if confirmed, has profound significance.
The definition of dragon kings also refers to the existence of a transient organization into extreme events that are statistically different from the rest of their smaller siblings. This realization opens the way for a systematic theory of the predictability of catastrophes, contrasting with the definition of black swans. The concept is rooted in geophysics [20]. One of Sornette's earlier works was related to the prediction of earthquakes. He saw that some degree of organization and coordination could serve to amplify fractures, which are always present and forming in the tectonic plates. Organization and coordination may turn small causes into large effects, ie, explosive ruptures such as earthquakes, which are characterized by low probability. This physical model suggests the possibility of predicting such events: in fact, if the time between these smaller fracture events decreases according to a specific log-periodic pattern, the probability of an earthquake is much higher, as illustrated by the sketch below.
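As a purely illustrative reading of this idea, the following Python sketch assumes a simplified, discrete form of the log-periodic signature in which successive inter-event intervals shrink by a roughly constant ratio as a critical time is approached; the function name, tolerance, and data are hypothetical and are not taken from Sornette's method.

# Toy sketch (hypothetical names and thresholds): detect whether precursor events
# accelerate with the roughly constant interval ratio implied by a discrete
# log-periodic pattern and, if so, extrapolate the critical time by summing the
# remaining geometric series of ever-shorter intervals.

def log_periodic_signature(event_times, tolerance=0.15):
    """event_times: increasing times of precursor 'fractures'; returns (flag, t_critical)."""
    intervals = [t2 - t1 for t1, t2 in zip(event_times, event_times[1:])]
    if len(intervals) < 3:
        return False, None  # too few events to judge any regularity
    ratios = [b / a for a, b in zip(intervals, intervals[1:])]
    mean_ratio = sum(ratios) / len(ratios)
    # Signature: intervals shrink (ratio < 1) by an approximately constant factor.
    regular = all(abs(r - mean_ratio) <= tolerance * mean_ratio for r in ratios)
    if not (regular and mean_ratio < 1.0):
        return False, None
    # Remaining time to the critical point: geometric series of the future intervals.
    t_critical = event_times[-1] + intervals[-1] * mean_ratio / (1.0 - mean_ratio)
    return True, t_critical

# Intervals 8, 4, 2, 1 (constant ratio 0.5): signature detected, critical time near 16.
print(log_periodic_signature([0.0, 8.0, 12.0, 14.0, 15.0]))

In Sornette's actual work the fit is to a continuous log-periodic power law; this discrete ratio test only conveys the intuition that accelerating, self-similar precursors make the extreme event statistically expectable.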
Dragon kings may represent an answer to Paté-Cornell [18] and Haugen and Vinnem [21], who warn against the misuse of the concept of the black swan as a reason for ignoring potential scenarios or waiting until a disaster happens to take safety measures and issue regulations against a predictable situation.
An example of a dragon king was sought among the accidents recorded in the MHIDAS (major hazard incident data service) database [22], which collected about 1500 events from 1916 to 1992. The accidents were ranked on the basis of the number of fatalities they caused and plotted in a double logarithmic ranking/fatalities diagram (Fig. 2.2); a simplified sketch of this rank-ordering analysis is given after the figure. The distribution of the events follows the power law with regularity, as shown by their good approximation to the straight line in Fig. 2.2. However, the recorded accident lying beyond the power law, the one causing the highest number of fatalities, may be the disaster that occurred in Bhopal in 1984, if it is considered to have caused all 16,000 fatalities claimed [23]. (Other sources report the number of fatalities to be around 3400 [24].)
Figure 2.2 Rank-ordering plot of the fatalities caused by major hazard incident data service accidents [22].
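As a sketch of the rank-ordering analysis behind Fig. 2.2 (using synthetic fatality counts rather than the MHIDAS records, which are not reproduced here), the following Python fragment sorts event sizes in decreasing order, fits a straight line to log(rank) versus log(size), and flags the point falling well off that line as a dragon king candidate; the threshold and data are illustrative assumptions.

import math

# Synthetic fatality counts: most events follow a power law; one extreme outlier
# is added by hand to play the role of the "dragon king" (illustrative values only).
fatalities = sorted([int(2000 / (i ** 0.9)) for i in range(1, 200)], reverse=True)
fatalities[0] = 16000  # hypothetical event far beyond the power law tail

ranks = range(1, len(fatalities) + 1)
log_size = [math.log(f) for f in fatalities]
log_rank = [math.log(r) for r in ranks]

# Least-squares fit of log(rank) = a + b*log(size): a straight line on the
# double logarithmic plot corresponds to a power law distribution.
n = len(fatalities)
mean_x, mean_y = sum(log_size) / n, sum(log_rank) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(log_size, log_rank))
     / sum((x - mean_x) ** 2 for x in log_size))
a = mean_y - b * mean_x

# Flag events whose observed rank deviates strongly from the fitted line.
for r, f in zip(ranks, fatalities):
    residual = math.log(r) - (a + b * math.log(f))
    if abs(residual) > 1.0:  # illustrative threshold for "beyond the power law"
        print(f"rank {r}: {f} fatalities lies off the power law (dragon king candidate)")

On real data, it is the deviation of the largest event from the fitted line (or a formal outlier test) that would justify calling it a dragon king rather than a mere tail fluctuation.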
If the Bhopal disaster can be defined as a dragon king, then an emerging organization of small previous “fractures” within the system of the chemical plant should be recognizable. Several authors have identified a “list of things that went wrong simultaneously at Bhopal” [24–28]:
• Many safety-related devices were not well designed.
• The plant was losing money, which resulted in staff and maintenance budget cutbacks.
• There was a social system that dismissed safety culture and created extreme tension between management and workers.
• The plant was to close permanently, which affected operator morale and contributed to the lack of maintenance and the bypassing of safety systems.
• There was complete failure or lack of an emergency response program.
• There was ineffective treatment of the injured.
• There were no citations or any awareness of the dangers of this plant.
These fractures led to an extraordinary amplification of the event, turning it into a large-scale effect. Their coordination could potentially have been identified, however, and such an accident predicted, as theorized by Sornette.

5. Small Things

Despite the different shades of their definitions, most of the authors cited in this chapter agree on one point: extreme accidents are the result of a combination of events, which often cannot be classified as extreme or unpredictable. In fact, such causal events may range from repetitive technical failures to common human errors.
This is demonstrated by Paltrinieri et al. [1] in the identification of the chain of events that led to the atypical accident in Buncefield in 2005 (see Table 2.2). For instance, one cause was frequent oil tank overfilling as a result of operators often not informing one another about changes in the incoming fuel flow rate [29]. Although Taleb [13] points to the consequentiality of black swans, Aven [30] reports the events and conditions that combined to lead to the Deepwater Horizon accident, previously mentioned as an example of a black swan. Paté-Cornell [18], in describing perfect storms, refers to a “conjunction of known phenomena,” which is “an alignment of factors whose possibilities are known separately but whose simultaneity seems extremely unlikely.” Finally, Sornette [19] bases his theory on the aggregation of relatively smaller events leading to extreme consequences, because “in the theory of complex systems and in statistical physics big disruptions do not need large perturbations to occur.” As the complexity of a system increases, its susceptibility is enhanced, and minute changes may have disproportionate consequences.
Preventing small things from happening might be the key to breaking this chain of events, because there are always potential accident scenarios that we are not aware we do not know (unknown unknowns). Awareness of their possibility should always be maintained, and appropriate precautions to consciously face our relative ignorance (shifting from the prevention of unknown unknowns to the prevention of known unknowns) would probably lower the probability of extreme events. Small things might be represented by recurring old issues in a plant or an organization, which do not need imaginative definitions to be prevented but perhaps only compliance with already existing procedures. Thus, on the one hand, we should avoid excessive focus on uncertainties in risk analysis, which may have the side effect of disregarding our consolidated knowledge. On the other hand, new definitions attributed to major accidents are the expression of the need for continuous improvement and adaptation to increasingly complex systems, such as today's chemical process industry. Emerging technologies, tighter interdependencies with critical infrastructures, and increasing volumes of handled hazardous substances may potentially hide unexperienced risks. Management of such risks requires high levels of risk awareness, openness to dynamic updating, and the capability of reorienting industrial activities toward intrinsically safer conditions. This approach may be described in a few words as “learning from experience,” a relatively small thing (conceptually less complex) with dramatically heightened implications.

6. Conclusions

Low-probability, high-impact events may feature different attributes. Atypical accidents are events that were disregarded in the hazard identification phase owing to a combined lack of awareness and knowledge, underlined by their classification into unknown unknowns and unknown knowns. Although black swans can, by definition, be explained only after the fact, dragon kings generate hope regarding the predictability of their formation and occurrence. However, all these classifications of major accidents agree on the relative complexity of their development. In fact, conjunctions of small things and tangled cause-consequence relationships among them have the potential to result in extreme effects. For this reason, a twofold approach should be adopted to limit major accidents. Well-known small failures, the multiplicity of which creates fertile ground for disasters, should not be disregarded but seriously and comprehensively addressed. At the same time, continuous improvement of the models and classifications employed to keep track of industry evolution and increasing complexity is essential. Such a dynamic approach should be adopted at different levels and include the ongoing improvement and integration of risk assessment techniques to progressively approximate real-world conditions.

References

[1] Paltrinieri N, Dechy N, Salzano E, Wardman M, Cozzani V. Lessons learned from Toulouse and Buncefield disasters: from risk analysis failures to the identification of atypical scenarios through a better knowledge management. Risk Analysis. 2012;32:1404–1419.

[2] Paltrinieri N, Dechy N, Salzano E, Wardman M, Cozzani V. Towards a new approach for the identification of atypical accident scenarios. Journal of Risk Research. 2013;16:337–354.

[3] Paltrinieri N, Oien K, Cozzani V. Assessment and comparison of two early warning indicator methods in the perspective of prevention of atypical accident scenarios. Reliability Engineering & System Safety. 2012;108:21–31.

[4] Rumsfeld D. News transcript. Washington, DC: U.S. Department of Defense, Office of the Assistant Secretary of Defense (Public Affairs); February 12, 2002.

[5] Paltrinieri N, Cozzani V, Wardman M, Dechy N, Salzano E. Atypical major hazard scenarios and their inclusion in risk analysis and safety assessments. Reliability, Risk and Safety–Back to the Future. 2010:588–595.

[6] Paltrinieri N, Dechy N, Salzano E, Wardman M, Cozzani V. Towards a new approach for the identification of atypical accident scenarios. Journal of Risk Research. 2013;16:337–354.

[7] Paltrinieri N, Wilday J, Wardman M, Cozzani V. Surface installations intended for Carbon Capture and Sequestration: atypical accident scenarios and their identification. Process Safety and Environmental Protection. 2014;92:93–107.

[8] Wilday J, Paltrinieri N, Farret R, Hebrard J, Breedveld L. Addressing emerging risks using carbon capture and storage as an example. Process Safety and Environmental Protection. 2011;89:463–471.

[9] Wilday J, Paltrinieri N, Farret R, Hebrard J, Breedveld L. Carbon capture and storage: a case study of emerging risk issues in the iNTeg-Risk project. In: Institution of Chemical Engineers Symposium Series. 156th ed. 2011:339–346.

[10] Paltrinieri N, Breedveld L, Wilday J, Cozzani V. Identification of hazards and environmental impact assessment for an integrated approach to emerging risks of CO2 capture installations. Energy Procedia. 2013:2811–2818.

[11] Paltrinieri N, Tugnoli A, Cozzani V. Hazard identification for innovative LNG regasification technologies. Reliability Engineering & System Safety. 2015;137:18–28.

[12] Paltrinieri N, Tugnoli A, Bonvicini S, Cozzani V. Atypical scenarios identification by the DyPASI procedure: application to LNG. Chemical Engineering Transactions. 2011:1171–1176.

[13] Taleb N. The black swan: the impact of the highly improbable. New York: Random House; 2007.

[14] Aven T. On the meaning of a black swan in a risk context. Safety Science. 2013;57:44–51.

[15] Aven T, Krohn B.S. A new perspective on how to understand, assess and manage risk and the unforeseen. Reliability Engineering & System Safety. 2014;121:1–10.

[16] Munsterhjelm-Ahumada K. Health authorities now admit severe side effects of vaccination: swine flu, Pandemrix and narcolepsy. Orthomolecular Medicine News Releases. 2012.

[17] Financial Post. Deepwater rig worker weeps as he admits he overlooked warning of blast that set off America's worst environmental disaster. 2013. http://business.financialpost.com/2013/03/14/halliburton-worker-weeps-as-he-admits-he-overlooked-warning-of-blast-that-set-off-americas-biggest-oil-spill-in-gulf/?__lsa=42e0-28bb.

[18] Paté-Cornell E. On “black swans” and “perfect storms”: risk analysis and management when statistics are not enough. Risk Analysis. 2012;32:1823–1833.

[19] Sornette D. Dragon-kings, black swans and the prediction of crises. ETH Zurich, Chair of System Design. 2009.

[20] Musgrave G.L, Weatherall J.O. The physics of wall street: a brief history of predicting the unpredictable. Business Economics. 2013;48:203–204.

[21] Haugen S, Vinnem J.E. Perspectives on risk and the unforeseen. Reliability Engineering & System Safety. 2015;137:1–5.

[22] MHIDAS (Major Hazard Incident Data Service). MHIDAS database. Harwell, UK: AEA Technology, Major Hazards Assessment Unit, Health and Safety Executive; 2003.

[23] Eckerman I. The Bhopal saga: causes and consequences of the world's largest industrial disaster. Universities Press; 2005.

[24] Labib A. Learning (and unlearning) from failures: 30 years on from Bhopal to Fukushima an analysis through reliability engineering techniques. Process Safety and Environmental Protection. 2015;97:80–90.

[25] Khan F.I, Abbasi S.A. Major accidents in process industries and an analysis of causes and consequences. Journal of Loss Prevention in the Process Industries. 1999;12:361–378.

[26] Yang M, Khan F, Amyotte P. Operational risk assessment: a case of the Bhopal disaster. Process Safety and Environmental Protection. 2015;97:70–79.

[27] Chouhan T.R. The unfolding of Bhopal disaster. Journal of Loss Prevention in the Process Industries. 2005;18:205–208.

[28] Eckerman I. The Bhopal gas leak: analyses of causes and consequences by three different models. Journal of Loss Prevention in the Process Industries. 2005;18:213–217.

[29] Health and Safety Executive. Buncefield: Why did it happen? Bootle, UK: HSE; 2011.

[30] Aven T. Comments to the short communication by Jan Erik Vinnem and Stein Haugen titled “perspectives on risk and the unforeseen”. Reliability Engineering & System Safety. 2015;137:69–75.
