Chapter 5

Natural Hazard Characterization

G. Lanzano*
A. Basco**
A.M. Pellegrino†
E. Salzano‡
*    National Institute of Geophysics and Volcanology (INGV), Milan, Italy
**    AMRA, Analysis and Monitoring of Environmental Risk, Naples, Italy
†    Engineering Department, University of Ferrara, Ferrara, Italy
‡    Department of Civil, Chemical, Environmental, and Materials Engineering, University of Bologna, Bologna, Italy

Abstract

Major natural events must be characterized in terms of their hazard, that is, their ability to cause significant harm. Although there have been attempts to predict the occurrence of natural events, uncertainties still exist with respect to large-scale natural events due to their complexity. Furthermore, for Natech risk analysis, a simplified event characterization is needed. This chapter focuses on the characterization of selected natural hazards which were found to be relevant with respect to impacts at industrial installations, that is, earthquakes, floods, and, due to recent events in Japan, tsunamis.

Keywords

natural hazard
natural disaster
Natech risk
earthquake
flood
tsunami

5.1. Introduction

Natural events can affect the integrity of industrial installations and cause damage, business interruption, and subsequent economic losses, but they can also induce major accidents due to the release of energy or materials into the atmosphere (Salzano et al., 2013; Krausmann et al., 2011). In this framework, earthquakes, floods, lightning, tsunamis, and storms (hurricanes, tornadoes), as well as other natural-hazard phenomena, such as intense rain or extreme temperatures, must be characterized in terms of their hazard, that is, their ability to cause significant harm and to possibly overwhelm the capacity of industrial and public emergency-response systems. These issues are a matter of concern for public authorities and the population. Indeed, since the beginning of human civilization, there have been attempts to predict the occurrence of natural events. The body of scientific literature, guidelines, standards, and historical databases on natural events, and more specifically on the hazard they represent, is therefore large. For each of the cited events, the knowledge available to scientists and practitioners is nowadays sufficient for the development of efficient measures and systems to mitigate the associated risks.
Despite these observations, and in the light of Natech risk reduction, public authorities, industry managers, and risk analysts often need a simplified characterization and representation of natural phenomena. This allows a proactive development and adoption, in the early design phase, of structural and organizational protection measures. To this end, a characterization with only 1 or 2 degrees of freedom for each natural phenomenon of concern is needed.
This requirement was first addressed by Cornell (1968) who, in his pioneering work on earthquake and reliability engineering, developed along with Luis Esteva the field of Probabilistic Seismic Hazard Analysis (PSHA), which is nowadays used worldwide (Baker, 2008). Following his general methodology, the hazard value (or hazard curve) H for a given natural event (nat) may be expressed as the cumulative probability that the intensity λnat exceeds a reference value α over the time interval T:

$$H_{nat}(\lambda_{nat}, T) = \varphi(\lambda_{nat} > \alpha \mid T) \tag{5.1}$$
where α is the reference intensity value and the function φ is evaluated by the Poisson law for the global annual rate of occurrence Rtot of rare events which is independent of time, that is:

$$\varphi = 1 - e^{-R_{tot}\,T} \tag{5.2}$$
The variable Rtot is typically expressed as a summation, over all natural-event scenarios, of normal exceedance probabilities, each characterized by a mean μ and a standard deviation σ:

$$R_{tot}(\lambda_{nat} > \alpha) = \sum_{scenario} \int_{\alpha}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(\lambda_{nat}-\mu)^2 / 2\sigma^2}\, d\lambda_{nat} \tag{5.3}$$
Where industrial equipment is concerned, hazard curves could be calculated for a time T that corresponds to the Technical Service Life (TSL) of the equipment. In some cases, designers consider the Functional Service Life (FSL), which is lower than the TSL (ISO, 2000). However, the first option is the conservative choice and keeps the analysis on the safe side. Accordingly, the following expression may be adopted:

$$H_{nat}(\lambda_{nat}, TSL) = \varphi(\lambda_{nat} > \alpha \mid TSL) \tag{5.4}$$
Often, 50 years is assumed as the TSL. This choice is not only technical. Indeed, the 10% exceedance probability of occurrence of λnat in 50 years is associated with the mean earthquake recurrence interval of 475 years. Hence, a 475-year mean recurrence interval has tended to become a widely used benchmark value for acceptable risk.
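This correspondence follows directly from Eq. (5.2): with a mean recurrence interval of 475 years, that is, Rtot = 1/475 per year, the exceedance probability over T = 50 years is

$$\varphi = 1 - e^{-R_{tot}\,T} = 1 - e^{-50/475} \approx 0.10$$

which is the quoted 10% probability in 50 years.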
The annual exceedance probability of occurrence, Rtot, is often calculated using a Weibull (plotting-position) formula in terms of the number of years of historical observations, n, and the rank m of a given intensity or magnitude among those observations, and it represents per se a measure of the hazard:

$$R_{tot}(\lambda_{nat} > \alpha) = \frac{m}{n+1} \tag{5.5}$$
From this equation, the return period (or the recurrence interval) TR, which is the average number of years between the occurrence of two events of equal (or greater) magnitude, can be derived. It is the reciprocal of Rtot.
These issues will be extensively discussed for the most important natural events of concern in this book: earthquake, tsunami, and flooding. For each of these events, this chapter will evaluate the main hazard intensity parameters and the related limitations and uncertainties. A similar approach can be adopted for any other natural-hazard phenomenon.

5.2. Prediction and Measurement

5.2.1. Earthquake

Earthquakes are seismic movements of the ground that are mainly caused by tectonic activities. The interaction between earthquakes and equipment can result in large damage when hazardous industrial installations are involved (Campedel et al., 2008). Clearly, Natech risk reduction needs a multidisciplinary effort that involves (1) the definition of the occurrence probability of a given earthquake intensity, (2) the structural analysis of the equipment interaction with the seismic forces, (3) the analysis of the specific response of the industrial process after structural damage of the equipment and its supports has occurred, and finally (4) the analysis of risks in global terms, following classical methodologies.

5.2.1.1. Hazard Parameters of Concern

With specific reference to earthquakes and given a hazardous industrial site of interest, ground motion is the main variable to take into account. More specifically, the measured ground motion refers to seismic waves radiating from the earthquake focus to the hazardous site and is related to the earthquake source, the path of the seismic wave from the source to the site, and the specific geomorphologic characteristics of the site. Moreover, the earthquake characteristics include energy, frequency content, phases, and many other variables that can affect the structural response of buildings and other structures.
Currently, the problem of defining effective and reliable predictors for the seismic response behavior of structures is one of the main topics of earthquake engineering. Empirical vulnerability analyses are often carried out in terms of peak ground acceleration, PGA (or alternatively in terms of peak ground velocity, PGV), as this parameter is relatively easy to infer by earthquake intensity conversion (Panico et al., 2016; Lanzano et al., 2013, 2014a; Salzano et al., 2003, 2009). Furthermore, several databases on historical damage due to earthquakes are usually related to the PGA of the earthquake [e.g., the pipeline damage database provided by Lanzano et al. (2015)]. Moreover, the calculation of this parameter from typical earthquake intensity or magnitude scales (e.g., the Modified Mercalli scale or the Richter scale) is straightforward.
Local and national authorities usually provide tools for PSHA (Bommer, 2002; Cornell, 1968) both in Europe and in the USA. Hence, the exceedance probability of PGA occurrence calculated over 1 year or on a 50-year basis is nowadays available. The exceedance probability curve is the general reference function for structural-design purposes. As a matter of fact, seismic loads are usually determined from the maximum PGA of an earthquake at the site of interest over a given time period (the TSL or other reference time intervals given by standards or legislation).
The PGA data from the Global Seismic Hazard Assessment Project (GSHAP), expressed as percentage of the earth’s gravity, g, is typically used as a reference for the earthquake hazard map. The GSHAP has produced a homogeneous seismic hazard map for horizontal PGAs representative of stiff site conditions, with a 10% chance of exceedance in 50 years. In addition, hazard curves and classifications can be found in local information systems. In this context, the US Geological Survey is a good source of seismic hazard maps (http://www.usgs.gov).
A discussion of the secondary natural hazards related to earthquakes, such as soil liquefaction and ground displacement, is outside the scope of this chapter, although these phenomena can produce significant damage to industrial equipment (Lanzano et al., 2014b; Krausmann et al., 2010).

5.2.1.2. Probabilistic Seismic Hazard Analysis

PSHA is a probability-based framework to evaluate the seismic hazard (Baker, 2008; Kramer, 1996). According to Baker (2008), PSHA is composed of the five steps described in Fig. 5.1. This approach aims to identify the annual rate of exceeding a given ground-motion intensity by considering all possible earthquakes and the associated ground motions together with their occurrence probabilities, thereby avoiding the difficult definition of a single worst-case ground-motion intensity.
Figure 5.1 Schematic of the Five Basic Steps in Probabilistic Seismic Hazard Analysis According to Baker (2008)
(A) Identify earthquake sources, (B) characterize the distribution of earthquake magnitudes from each source, (C) characterize the distribution of source-to-site distances from each source, (D) predict the resulting distribution of ground motion intensity, (E) combine information from A–D to calculate the annual rate of exceeding a given ground motion intensity.
For Natech risk analysis it is important to know the ground shaking at a hazardous site. In order to predict the ground motion, the distribution of distances from earthquakes to the site of interest needs to be modeled. Baker (2008) notes that, for a given earthquake source, an equal occurrence probability is generally assumed at any location on the fault. If locations are uniformly distributed, the distribution of source-to-site distances can be derived straightforwardly from the geometry of the source.
For any earthquake, if IM is the ground-motion intensity measure of interest (such as PGA), the natural logarithm of IM is assumed to be normally distributed. Hence, at any distance r from an earthquake source of magnitude m, the probability of exceeding any PGA level x can be evaluated using the corresponding mean and standard deviation of ln IM:

$$P(IM > x \mid m, r) = 1 - \Phi\!\left(\frac{\ln x - \overline{\ln IM}}{\sigma_{\ln IM}}\right) \tag{5.6}$$
where Φ is the standard normal cumulative distribution function (Baker, 2008). Considering the number of possible sources, nsources, we can then integrate over all considered magnitudes and distances in order to obtain λ (IM > x), that is, the rate of exceeding IM:

$$\lambda(IM > x) = \sum_{i=1}^{n_{sources}} \lambda(M_i > m_{min}) \int_{m_{min}}^{m_{max}} \int_{0}^{r_{max}} P(IM > x \mid m, r)\, f_{M_i}(m)\, f_{R_i}(r)\, dr\, dm \tag{5.7}$$
where λ (Mi > mmin) is the rate of occurrence of earthquakes greater than mmin from the source i, and fM(m) and fR(r) are the probability density functions (PDFs) for magnitude and distance.
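As an illustration of how Eqs. (5.6) and (5.7) can be evaluated numerically, the following Python sketch discretizes the magnitude and distance integrals for a single source. The source rate, Gutenberg-Richter parameters, and ground-motion model are invented placeholders, not values from any cited study; the normal CDF is taken from SciPy.

```python
import numpy as np
from scipy.stats import norm

# Single-source numerical PSHA sketch discretizing Eq. (5.7).
# All numbers (rates, geometry, ground-motion coefficients) are illustrative placeholders.
m_min, m_max = 5.0, 8.0
rate_m_min = 0.05          # lambda(M > m_min), events per year (assumed)
b_value = 1.0              # Gutenberg-Richter b-value (assumed)
sigma_ln = 0.6             # aleatory std of ln(IM) (assumed)

m = np.linspace(m_min, m_max, 200)   # magnitude grid
r = np.linspace(1.0, 100.0, 200)     # source-to-site distance grid (km)
dm, dr = m[1] - m[0], r[1] - r[0]

# Truncated Gutenberg-Richter magnitude PDF f_M(m)
beta = b_value * np.log(10.0)
f_m = beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))

# Uniform source-to-site distance PDF f_R(r) (equally likely rupture locations)
f_r = np.full_like(r, 1.0 / (r[-1] - r[0]))

def mean_ln_im(m, r):
    """Placeholder ground-motion model: mean ln(PGA in g)."""
    return -1.0 + 1.0 * m - 2.0 * np.log(r + 10.0)

def rate_of_exceedance(x):
    """lambda(IM > x): rectangle-rule integration of Eq. (5.7)."""
    M, R = np.meshgrid(m, r, indexing="ij")
    p_exc = 1.0 - norm.cdf((np.log(x) - mean_ln_im(M, R)) / sigma_ln)  # Eq. (5.6)
    return rate_m_min * np.sum(p_exc * f_m[:, None] * f_r[None, :]) * dm * dr

for pga in (0.1, 0.2, 0.4):
    print(f"lambda(PGA > {pga:.1f} g) = {rate_of_exceedance(pga):.2e} per year")
```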
Occasionally, the results of a PSHA are expressed in terms of return periods of exceedance, that is, the reciprocal of the rate of occurrence, or as probabilities of exceeding a given ground-motion intensity within a specific time window for a given rate of exceedance. The latter is calculated under the assumption that the probability distribution of time between earthquakes is Poissonian. The probability of observing at least one event in a period of time t is therefore equal to (Baker, 2008):

$$P(\text{at least one event in time } t) = 1 - e^{-\lambda t} \approx \lambda t \tag{5.8}$$
where λ is the rate of event occurrence as defined earlier. For an exhaustive discussion of PSHA the reader is referred to Baker (2008) and Kramer (1996).
For design applications, a convention has emerged of considering the probability of exceeding a ground-motion level within a given design life (TL) of the structure in question [t = TL in Eq. (5.8)]. Hence, for a given probability and design life, the seismic hazard can also be expressed in terms of the return period TR given by:

$$T_R = -\frac{T_L}{\ln(1 - P)} \tag{5.9}$$
It is common to see seismic hazard defined in terms of P and TL in the current codes and provisions, for example, in Eurocode 8 (CEN, 2004).
For some critical structures, such as nuclear power plants, the design ground motion may be more commonly expressed as an annual probability or frequency of being exceeded (i.e., TL = 1 year). Under International Atomic Energy Agency regulations (IAEA, 2003), a typical design criterion for critical elements of a nuclear power station is a ground motion with an annual probability of being exceeded of 10⁻⁴. Using Eqs. (5.8) and (5.9), this corresponds to approximately a 1% probability of being exceeded in 100 years (or a 0.5% probability of being exceeded in 50 years).
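The following short Python sketch (an illustration, not code from any cited source) applies Eqs. (5.8) and (5.9) to reproduce the benchmark values mentioned in this section.

```python
import math

def prob_exceedance(annual_rate, years):
    """Poisson probability of at least one exceedance in a time window, Eq. (5.8)."""
    return 1.0 - math.exp(-annual_rate * years)

def return_period(p, design_life):
    """Return period T_R for exceedance probability p over design life T_L, Eq. (5.9)."""
    return -design_life / math.log(1.0 - p)

print(return_period(0.10, 50))     # ~475 years: "life-safety" benchmark
print(return_period(0.02, 50))     # ~2475 years: no-collapse criterion
print(prob_exceedance(1e-4, 100))  # ~0.01: IAEA annual criterion over 100 years
print(prob_exceedance(1e-4, 50))   # ~0.005: the same criterion over 50 years
```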
More generally, performance-based seismic design (PBSD) formalizes the approach of setting multiple objectives: structures should withstand minor or more frequent levels of shaking with only nonstructural damage, while life-safety and the avoidance of collapse must be ensured under severe shaking (ATC, 1978). These objectives define the limit states, which describe the maximum extent of damage expected to the structure for a given level of ground motion.
Although there are different definitions of limit states, a 475-year return period (corresponding to P = 10% in TL = 50 years) is commonly adopted as a basis for ensuring “life-safety.” However, several codes have recently begun to adopt 2475 years (corresponding to P = 2% in TL = 50 years) as the return period for the no-collapse criterion, even though it is subsequently rescaled to incorporate an assumed inherent margin of safety against collapse (NBCC, 2005; NEHRP, 2003). Longer return periods may be considered for critical structures, but this kind of analysis would require that uncertainties be treated carefully.
The 2009 revision to the NEHRP Provisions introduces a new conceptual approach for the definition of the input seismic action (NEHRP, 2009). The seismic input (maximum considered earthquake) is modified by a risk coefficient (for both short and long periods) which is derived from a probabilistic formulation of the likelihood of collapse (Luco et al., 2007). These modifications change the definition of seismic input to ensure a more uniform level of collapse prevention.
For most applications, the hazard is described in terms of a single parameter, that is, the value of the reference PGA on type A ground, which corresponds to rock or other rock-like geological formations. As an example, Fig. 5.2 shows the European Seismic Hazard Map (ESHM), which illustrates the level of ground shaking, in terms of PGA, expected to be exceeded with a 10% probability in 50 years, corresponding to a return period TR of 475 years.
Figure 5.2 European Seismic Hazard Map in Terms of Exceeding a Peak Ground Acceleration (PGA) With a Probability of 10% in 50 Years ©SHARE project, http://www.share-eu.org/
European legislation (CEN, 2004) prescribes the use of zones for which the reference PGA hazard on a “rock” site (ag) is assumed uniform. Many seismic codes are moving away from this particular practice, choosing instead to define the hazard directly for the site under consideration (NEHRP, 2003, 2009; NTC, 2008; NBCC, 2005), or allowing for interpolation between contour levels of uniform hazard.
As an example, and to give an order of magnitude of the acceleration levels in Europe, for a medium-high seismicity area like Italy four seismic zones are defined (OPCM, 2003) according to the value of the maximum horizontal peak ground acceleration ag whose probability of exceedance is 10% in 50 years (Table 5.1). Subsequently, in the new Italian Building Code (NTC, 2008) the Italian Hazard Map (called MPS04) was defined for each single location and is therefore site specific.

Table 5.1

Seismic Zonation in Italy According to the OPCM (2003)

Seismic Zone | Ground Acceleration (g) With Probability of Exceedance Equal to 10% in 50 Years (ag) | Acceleration (g) of the Elastic Response Spectrum at Period T = 0
1 | >0.25 | 0.35
2 | >0.15–0.25 | 0.25
3 | 0.05–0.15 | 0.15
4 | <0.05 | 0.05

5.2.2. Tsunami

A tsunami (lit. “harbor wave”) is a series of long-period water waves caused by the displacement of a large volume of water, generated by undersea earthquakes, volcanic eruptions, subaerial landslides, and other disturbances above or below water. The occurrence of tsunamis can cause major (direct and indirect) losses in terms of human lives and infrastructure damage, as seen recently in Japan (2011), Chile (2010), and on the coastlines of the Indian Ocean (2004) (Suppasri et al., 2013; Mas et al., 2012; Koshimura et al., 2009). From an industrial-safety point of view, port structures are particularly vulnerable.
Based on historical tsunami observations, the vast majority of tsunamis are induced by earthquakes. The resulting waves have small amplitude (the wave height above the normal sea surface), and a very long wavelength (often hundreds of kilometers), whereas normal ocean waves have a height of roughly 2 m and a wavelength of only 100–200 m. Tsunami waves can travel at speeds of over 800 km/h in the open sea. Due to the large wavelength, the wave takes 20–30 min to complete a cycle and has an amplitude of less than 1 m. For this reason, tsunamis are difficult to detect in deep water.
Like other types of waves, tsunami waves have a positive peak (ridge or crest) and a negative peak (trough). If the ridge arrives first, a huge breaking wave or sudden and rapid flooding on land occurs. The resulting temporary rise in sea level is called run-up, measured in meters above a reference sea level. If the trough arrives first, the shoreline recedes as the water is drawn back. A large tsunami may exhibit multiple waves arriving over a period of hours, with significant time between the wave crests. It is interesting to note that the first wave to reach the shore may not have the highest run-up (Nelson, 2012).
Tsunamis can cause damage via two mechanisms: (1) the force of a wall of water traveling at high speed slamming into coastlines and structures, and (2) the drag forces of a large water volume that recedes from the land, carrying a large amount of debris with it. This latter phenomenon can occur even with waves that do not appear to be large. From an engineering point of view, the tsunami action needs to be evaluated in terms of loading on structures, and proper intensity parameters must be considered. In common design practice, the relevant parameters are the maximum water height, hw, and the maximum water velocity, vw.
Probabilistic tsunami hazard analysis (PTHA) is similar to the widely used PSHA for earthquakes. The basic approach is to combine the rate at which tsunamis are generated with the distribution of amplitudes that are expected to occur at the site for a given tsunami. The probabilistic tsunami hazard from earthquakes is given by (PGEC, 2010; Rikitake and Aida, 1988):

$$\nu_{EQK}(W_{tsu} > z) = \sum_{i=1}^{N_{FLT}} N_i(M_{min}) \int_{m} \int_{Loc} f_{m_i}(M)\, f_{Loc_i}(Loc)\, P(W_{tsu} > z \mid M, Loc)\, dM\, dLoc \tag{5.10}$$
where νEQK (Wtsu > z) is the annual rate of tsunami wave heights exceeding z, NFLT is the number of tsunamigenic fault sources, Ni (Mmin) is the rate of earthquakes with magnitude greater than Mmin for source i, fm and fLoc are PDFs for the magnitude and rupture location, and P(Wtsu > z|M,Loc) is the conditional probability of the tsunami wave height, Wtsu, exceeding the test value z.
Assuming the tsunami wave heights are log-normally distributed, the conditional probability of exceeding wave height z is given by (PGEC, 2010):

$$P(W_{tsu} > z \mid M, Loc) = 1 - \Phi\!\left(\frac{\ln z - \ln \hat{W}_{tsu}(M, Loc)}{\sigma_{EQK}}\right) \tag{5.11}$$
where Ŵtsu(M, Loc) is the median wave height, σEQK is the aleatory variability (standard deviation in natural logarithm units) of the tsunami wave height from earthquakes, and Φ is the standard normal cumulative distribution function.
If only a small number of representative scenarios (magnitude and location) are considered, then the tsunami hazard from earthquakes simplifies to (PGEC, 2010):

$$\nu_{EQK}(W_{tsu} > z) = \sum_{i=1}^{N_{FLT}} \sum_{j=1}^{N_{S_i}} rate_{ij}\, P(W_{tsu} > z \mid M_{ij}, Loc_{ij}) \tag{5.12}$$
where rateij is the rate of occurrence of scenario j from source i.
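To make the scenario-based formulation concrete, the sketch below evaluates Eqs. (5.11) and (5.12) in Python for a handful of hypothetical scenarios; the scenario rates, median wave heights, and the value of σEQK are invented for illustration (the normal CDF is taken from SciPy).

```python
import math
from scipy.stats import norm

# Scenario-based tsunami hazard, Eqs. (5.11)-(5.12).
sigma_eqk = 0.5  # aleatory std of ln(wave height), natural log units (assumed)

# (annual rate of occurrence, median wave height at the site in m) per scenario
scenarios = [
    (1e-3, 2.0),   # e.g., moderate local source
    (2e-4, 5.0),   # e.g., large local source
    (5e-5, 10.0),  # e.g., extreme far-field source
]

def nu_exceedance(z):
    """Annual rate of tsunami wave height exceeding z, Eq. (5.12)."""
    total = 0.0
    for rate_ij, w_median in scenarios:
        # Conditional lognormal exceedance probability, Eq. (5.11)
        p = 1.0 - norm.cdf((math.log(z) - math.log(w_median)) / sigma_eqk)
        total += rate_ij * p
    return total

for z in (1.0, 3.0, 6.0):
    print(f"nu(W_tsu > {z} m) = {nu_exceedance(z):.2e} per year")
```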
The tsunami wave heights are calculated using numerical simulations. A site-specific PTHA involves the production of a full source-to-site numerical tsunami simulation on a high-resolution digital elevation model (DEM) for each considered potential source scenario. As a result, the computational cost of site-specific PTHA is generally almost unaffordable in practice. Hence, specific strategies are being developed to reduce the computational burden (Geist and Lynett, 2014). These strategies typically rely on crude approximations that extrapolate offshore wave heights inland, and/or on an oversimplified treatment of the seismic source variability with a coarser selection of the relevant seismic sources (Thio et al., 2010). These procedures are therefore affected by very large epistemic uncertainties.
A novel methodology to reduce the computational cost associated with a site-specific tsunami hazard assessment for earthquake-induced tsunamis has recently been presented by Lorito et al. (2015). It allows high-resolution inundation simulations on realistic topobathymetry to be performed only for the relevant seismic sources.

5.2.3. Floods

The US Federal Emergency Management Agency (FEMA) defines flooding as a general and temporary condition of partial or complete inundation of normally dry land areas from the overflow of inland or tidal waters, from the unusual and rapid accumulation or runoff of surface waters from any source, and from mudflows. Most floods fall into three major categories: riverine, coastal, and shallow flooding. Alluvial fan flooding is another type of flooding more common in mountainous areas (Pellegrino et al., 2015 2010).
Flood hazard maps are available in many regions in Europe and the United States. Often, those maps report the number of observed floods with specific magnitude in the areas of concern over a given time interval, thus following a Weibull analysis. Table 5.2 shows a proposal for a flood hazard classification as reported by the European Spatial Planning Observation Network (ESPON, 2006).

Table 5.2

Flood Hazard Classification Based on Observations Over a Period of 15 Years (ESPON, 2006)

Number of Observed Major River Floods at NUTS3 Level | Hazard Class | Definition
0 | 1 | Very low hazard
1 | 2 | Low hazard
>1–≤2 | 3 | Moderate hazard
>2–≤3 | 4 | High hazard
>3 | 5 | Very high hazard
The identification of flood-prone areas requires the collection and analysis of historical flood data, the availability of accurate digital elevation data, water discharge data, and stream cross-sections located throughout the watershed (Baban, 2014). In Europe, this data is available only for certain case-study areas. So far, the mapping of flood-prone areas in Europe does not follow a consistent approach, and different methods are used in different catchment areas or riverbeds.
For the purpose of Natech risk analysis, the area affected by flooding, or alternatively the intensity of the phenomenon, is characterized by the maximum water depth (hw) expected at the hazardous industrial site and/or the maximum expected water velocity (vw). These two parameters are highly dependent on the flood scenario considered. Hence, different return times are usually assessed for each specific flood scenario.
In several European countries, flood hazard maps showing the maximum expected water depth and velocity given the return time of the flood event are available. Three categories of water impact are typically defined. With respect to water velocity these are: (1) slow submersion (negligible water velocity), (2) low-speed wave (water velocity ≤ 1 m/s), and (3) high-speed wave (water velocity > 1 m/s). Concerning water height the categories are: (1) low height (0.5 m, no damage expected), (2) intermediate height (1 m, damage expected), and (3) high water height (1.5 m, extensive damage expected).
Table 5.3 shows a hazard classification that is based on maximum water depth and velocity. If both values are available, those leading to the worst-case classification should be adopted. Naturally, such a hazard characterization is dependent on the time of return selected for the reference flood events.

Table 5.3

Flood Hazard Classification Based on Water Depth and Velocity

Hazard Index | Hazard Classification | Water Depth (m) | Water Velocity (m/s)
1 | Very low | ≤0.5 | ≤0.2
2 | Low | >0.5–1 | >0.2–0.5
3 | Moderate | >1–1.5 | >0.5–1.0
4 | High | >1.5 | >1.0
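
As an illustration, the worst-case rule described above for Table 5.3 can be encoded in a few lines of Python; this is a sketch, not part of any cited methodology.

```python
def flood_hazard_index(depth_m=None, velocity_ms=None):
    """Hazard index per Table 5.3; the worst case of depth and velocity governs."""
    def depth_class(h):
        return 1 if h <= 0.5 else 2 if h <= 1.0 else 3 if h <= 1.5 else 4

    def velocity_class(v):
        return 1 if v <= 0.2 else 2 if v <= 0.5 else 3 if v <= 1.0 else 4

    classes = []
    if depth_m is not None:
        classes.append(depth_class(depth_m))
    if velocity_ms is not None:
        classes.append(velocity_class(velocity_ms))
    return max(classes)  # worst-case classification

print(flood_hazard_index(depth_m=0.8, velocity_ms=1.2))  # -> 4 (High; velocity governs)
```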

If data concerning maximum water depth and water velocity are not available, a general natural hazard index with specific reference to Natech risks can be obtained from historical data. The approach originally developed by the European Spatial Planning Observation Network (ESPON, 2006) can be adopted to obtain a general hazard characterization based on a 50-year observational period. The resulting hazard matrix is shown in Table 5.4.

Table 5.4

Flood Hazard Classification Based on Number of Floods Observed in 50 Years

Hazard Index | Hazard Classification | Number of Observed Floods in 50 Years
1 | Very low | 0
2 | Low | 1–3
3 | Moderate | 4–6
4 | High | ≥7

Adapted from the European Spatial Planning Observation Network (ESPON, 2006)

If flood maps are available that only show the maximum extent of the flooded area for a given return time, a simplified estimation of the maximum water depth and velocity can be obtained as follows:
The maximum water depth may be assumed as the difference between the height of the soil at the boundary of the flooded zone and the mean height of the site of interest.
The mean velocity of a gravity-driven flow in rough open channels and rivers can be estimated using Manning’s empirical formula:

$$v = \frac{1}{n}\, s^{1/2}\, R^{2/3} \tag{5.13}$$
where v (m/s) is the mean velocity of the flow, s (m/m) is the slope of the channel if the water depth is constant, R (m) is the hydraulic radius of the cross-section of the channel (defined as the area of the cross-section of the channel divided by the length of the wetted perimeter, which is easily determined assuming a simplified trapezoidal shape of the channel), and n is a roughness coefficient that can be related through standard values to the river characteristics (e.g., n = 0.030 for clean and straight rivers, n = 0.035 for major rivers, and n = 0.040 for sluggish rivers with deep pools).
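As a worked example, Eq. (5.13) can be evaluated directly; the channel dimensions below are invented for illustration.

```python
def manning_velocity(n, slope, hydraulic_radius):
    """Mean flow velocity from Manning's formula, Eq. (5.13): v = (1/n) s^(1/2) R^(2/3)."""
    return (1.0 / n) * slope**0.5 * hydraulic_radius**(2.0 / 3.0)

# Simplified trapezoidal cross-section (illustrative dimensions)
area = 120.0             # cross-section area, m^2
wetted_perimeter = 45.0  # m
R = area / wetted_perimeter

# Major river (n = 0.035) with a slope of 0.001 m/m
print(manning_velocity(n=0.035, slope=0.001, hydraulic_radius=R))  # ~1.7 m/s
```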
The FEMA Flood Map Service Center (MSC) is the official public source for flood hazard information produced in support of the National Flood Insurance Program (NFIP) in the USA (https://msc.fema.gov/). MSC produces official flood maps and gives access to a range of other flood hazard products. Generally, three main approaches are taken into consideration to address the risks due to flooding:
Statistical studies to determine the probability and frequency of high discharges of streams that cause flooding.
Analytical models and maps to determine the extent of possible flooding when it occurs in the future.
Monitoring storms and snow levels to provide short-term flood prediction, since the main causes of flooding are abnormal amounts of rainfall and sudden thawing of snow or ice.

5.2.3.1. Probability and Frequency of Flooding

If data on stream discharge is available over an extended period of time, it is possible to determine the flood frequencies for any given stream. Starting from historical observations, statistical analysis can be used to determine how often a given discharge or stage of a river is expected. This allows the definition of a return period or recurrence interval and the probability of a given discharge in the stream for any year. The yearly maximum discharge of a stream from one gauging station over a sufficiently long period of time is needed for this analysis.
As a first step in the determination of the recurrence interval, the yearly discharge values are ranked (Nelson, 2015). Each discharge is associated with a rank, m, with m = 1 assigned to the maximum discharge over the years of record and m = n assigned to the smallest discharge, where n is the number of years of record. Using the following Weibull equation, the number of years of record, n, and the rank of each peak discharge are then used to calculate the recurrence interval, R:

$$R = \frac{n+1}{m} \tag{5.14}$$
Knowing the recurrence interval and the yearly discharge, these two quantities can be combined in a plot that allows the determination of the expected peak discharge for floods with specific return periods. An example of such a plot is shown in Fig. 5.3 for the Red River of the North gauging station at Fargo, North Dakota.
Figure 5.3 Frequency of Flooding: Relation Between the Peak Discharge for Each Year and the Recurrence Interval of the Red River in Fargo, North Dakota. From E.M. Baer (2007).
The probability, Pe, of a specific stream discharge occurring in any year can be calculated as the reciprocal of the recurrence interval of Eq. (5.14). Pe is also called the annual exceedance probability (Nelson, 2015):

$$P_e = \frac{m}{n+1} \tag{5.15}$$
The probability that one or more floods exceeding a given severity occur during a period of n years can be calculated using the following equation:

$$P_t = 1 - (1 - P_e)^n \tag{5.16}$$
where Pt is the probability of at least one occurrence over the entire time period of n years.
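The following Python sketch ties Eqs. (5.14)-(5.16) together for an invented series of yearly peak discharges (the values are placeholders, not gauging-station data).

```python
import numpy as np

# Weibull plotting-position analysis of yearly peak discharges (m^3/s)
peaks = np.array([850, 1200, 640, 2100, 930, 1550, 780, 1100, 1900, 700])
n = len(peaks)

order = np.argsort(peaks)[::-1]  # rank 1 = largest discharge on record
ranks = np.empty(n, dtype=int)
ranks[order] = np.arange(1, n + 1)

R = (n + 1) / ranks   # recurrence interval, Eq. (5.14)
Pe = ranks / (n + 1)  # annual exceedance probability, Eq. (5.15)

for q, rank, r_i, pe in sorted(zip(peaks, ranks, R, Pe)):
    print(f"Q = {q:6.0f} m^3/s  rank = {rank:2d}  R = {r_i:5.1f} yr  Pe = {pe:.3f}")

# Probability of at least one flood at least as large as the rank-1 event
# in a 20-year period, Eq. (5.16)
Pt = 1.0 - (1.0 - Pe[order[0]]) ** 20
print(f"P(rank-1 flood equaled or exceeded in 20 yr) = {Pt:.2f}")
```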

5.2.3.2. Flood Maps

Flood hazard maps illustrate the areas susceptible to flooding when a river exceeds its banks due to different discharge scenarios. Coupled with topographic information and supported by satellite images and aerial photography of past flood events, these maps can be created using historical data on river stages and discharge levels of previous floods (Nelson, 2015). Fig. 5.4 shows a hazard map for a hypothetical 10–20-year, a 100-year, and a 200-year flood for a region in Germany crossed by the Elbe River. While flood hazard maps contain information on the magnitude and likelihood of a flood event, flood risk maps also include information on the potential consequences of flooding.
Figure 5.4 Hazard Map for Hypothetical 10–20-Year, 100-Year, and 200-Year Inundation Scenarios for the River Elbe Bundesanstalt für Gewässerkunde, Germany, http://geoportal.bafg.de/mapapps/resources/apps/HWRMRL-DE/index.html?lang=de
There are different methods to quantify flood hazards and risks which result in different types of flood maps as illustrated in Fig. 5.5. Flood hazards can be evaluated using methods of lower or higher complexity which depend on the available data, resources, and time. Nevertheless, de Moel et al. (2009) indicate that the conceptual framework behind the calculation of flood hazards is general and in principle follows three steps:
1. Estimation of the discharge levels for specific return periods. Most commonly, hydrological models are used to calculate discharges. These models require spatially explicit and comprehensive knowledge of meteorological conditions, soil, and land cover. Alternatively, discharge levels can be determined by frequency analyses of discharge records and fitting of extreme-value distributions or, in case this information is not available, from precipitation records using runoff coefficients.
2. Translation of discharge levels into water levels once discharges and their associated return periods have been derived. This is usually accomplished using so-called rating (stage-discharge) curves or with 1D or 2D hydrodynamic models.
3. Determination of the inundated area (and—if possible—also of the flood depth) by combining water levels with a DEM. This procedure yields a flood map that shows either flood extent or depth.
Figure 5.5 Conceptual Framework for Flood Hazard and Risk Mapping for a Hypothetical Case Courtesy of de Moel et al. (2009).
Flood extent maps are the most common flood hazard maps. They show the inundated areas for a specific scenario which can either be a historical or a hypothetical flood event with a specific return period, for example, once in 50 or 100 years (Fig. 5.6B). When the extent of a flood is known for specific return periods and a DEM is available, the flood depths can be easily derived. This process results in a flood depth map (Fig. 5.6C).
Figure 5.6 Different Flood Map Types Based on a Hypothetical Case
(A) Historical flood map, (B) flood extent map, (C) flood depth map, (D) flood danger map, (E) qualitative risk map, (F) quantitative risk (damage) map. Courtesy of de Moel et al. (2009).
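As a sketch of step 3 of the framework above, a flood depth map (cf. Fig. 5.6C) can be obtained by subtracting a DEM from a modeled water-level surface; the grids below are synthetic placeholders.

```python
import numpy as np

dem = np.array([[2.0, 2.5, 3.0],
                [1.5, 2.0, 2.8],
                [1.0, 1.8, 2.5]])  # ground elevation, m above sea level
water_level = 2.2                  # modeled flood water level, m above sea level

depth = np.clip(water_level - dem, 0.0, None)  # negative values -> dry cells
extent = depth > 0.0                           # boolean flood extent mask

print(depth)   # flood depth map (cf. Fig. 5.6C)
print(extent)  # flood extent map (cf. Fig. 5.6B)
```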
In flood hazard maps, the inundation area and the related water depths are normally considered the most important parameters for estimating the flood’s adverse consequences. However, other parameters, such as water velocity, the duration of the flood, or the rate of water rise can also be crucial depending on the circumstances of the flood. In particular with respect to Natech risks, the water velocity can be a determining factor, as the vulnerability of hazardous equipment toward still water or fast water flows differs significantly.

5.2.3.3. Flood Forecasting and EU Floods Directive

Flood forecasting allows the alerting of authorities and the population to imminent flood conditions so that they can take appropriate protective actions. The most accurate flood forecasts use long time series of historical data relating stream flows to measured past rainfall events. Coupling this historical information with real-time observations, radar estimates of rainfall, and general weather-forecasting techniques is needed to make the most accurate flood forecasts (Pellegrino et al., 2010, 2015). The intensity and height of a flood can be predicted with good accuracy and significant lead time if high-quality data is available. Flood forecasts typically provide parameters like the maximum expected water level and the likely time of its arrival at specific locations along a waterway. There are several regulations, initiatives, and programs in place that support flood forecasting and general risk reduction from flooding. Some selected examples are provided as follows.
In many countries, urban areas prone to flooding are protected against a 100-year flood, that is, a flood that has a probability of around 63% of occurring in any 100-year period of time. The United States National Weather Service (NWS) Northeast River Forecast Center (NERFC) (http://www.weather.gov/nerfc/), which is a part of the US National Oceanic and Atmospheric Administration (NOAA), assumes for flood forecasting in urban areas that it takes at least 1 in. (25 mm) of rainfall in about 1 h to start significant ponding of water on impermeable surfaces. Many River Forecast Centers of the NWS routinely issue Flash Flood Guidance and Headwater Guidance, which indicate the amount of rainfall that would need to fall in a short period of time to cause flooding (http://www.srh.noaa.gov/rfcshare/ffg.php).
In addition, the United States follows an integrated approach to producing real-time hydrologic forecasts, which are available from the NWS (http://water.weather.gov/ahps/). This approach combines quantitative precipitation forecasts (QPF) of expected rainfall and/or snowmelt with, for example, data on real-time, recent, and past streamflow conditions from the U.S. Geological Survey (http://waterwatch.usgs.gov/), community collaborative observing networks (http://www.cocorahs.org/), automated weather sensors, and the NOAA National Operational Hydrologic Remote Sensing Center (http://www.nohrsc.noaa.gov/). This forecasting network also includes various hydroelectric companies.
The Global Flood Monitoring System (GFMS) is an experimental system funded by NASA which maps flood conditions quasi worldwide (http://flood.umd.edu). Users anywhere in the world can use GFMS to determine when floods might occur in their area. GFMS uses precipitation data from NASA’s Global Precipitation Measurement (GPM) mission, a system of Earth-observing satellites. Rainfall data from GPM is combined with a land surface model to determine how much water is soaking into the ground and how much is running off into streams. Users can view statistics for rainfall, streamflow, water depth, and flooding every 3 h, at each 12-km grid point on a global map, with forecasts extending 5 days into the future. The resolution of the produced inundation maps is 1 km.
The FloodList project reports on major flood events as they occur, to raise awareness of flood risks and the associated potentially severe consequences (http://floodlist.com). FloodList has become an authoritative source of up-to-date information on flood events.
In response to several recent extreme flood events, the European Floods Directive on the assessment and management of flood risks requires EU Member States to develop Flood Risk Management Plans (European Union, 2007). These plans need to set appropriate objectives for the management of flood risk and reduce potential adverse consequences on a number of risk receptors. The Directive prescribes a three-step procedure to achieve its objectives:
Preliminary Flood Risk Assessment: The Floods Directive requires Member States to engage their government departments, agencies, and other bodies to draw up a Preliminary Flood Risk Assessment. This assessment has to consider impacts on human health and life, the environment, cultural heritage, and economic activity.
Risk Assessment: The information in this assessment will be used to identify the areas at significant risk which will then be modeled in order to produce flood hazard and risk maps. These maps include detail on the flood extent, depth, and level for three risk scenarios (high, medium, and low probability).
Flood Risk Management Plans: These plans are meant to indicate to policy makers, developers, and the public the nature of the risk and the measures proposed to manage these risks. However, they are not formally binding (e.g., with respect to land-use planning). The Floods Directive prescribes an active involvement of all interested stakeholders in the process. The management plans are to focus on prevention, protection, and preparedness. Also, flood risk management plans shall take into account the relevant environmental objectives of Article 4 of the EU Water Framework Directive (European Union, 2000).
The Global Flood Awareness System (GloFAS, http://www.globalfloods.eu) was jointly developed by the European Commission and the European Centre for Medium-Range Weather Forecasts (ECMWF) in the United Kingdom. The system, which is independent of administrative and political boundaries, links state-of-the-art weather forecasts with a hydrological model. Since GloFAS has a continental scale set-up it provides downstream countries with information on upstream river conditions as well as continental and global overviews.

5.3. Limitations, Uncertainties, and Future Impacts of Climate Change

Other natural phenomena could be included in the list of events discussed earlier. Many are strongly affected, either in terms of their frequency or intensity, by climate change (IPCC, 2007). Severe weather phenomena, such as heat and cold waves, tornadoes, cyclones (typhoons or hurricanes), but also intense rainfall, waterspouts, and extreme winds can cause major Natech events, as discussed in Chapter 3. These phenomena have so far been analyzed only loosely, and few detailed studies are available on the vulnerability of industrial equipment toward these natural hazards.
Finally, it is important to note that uncertainties in the assessment of natural hazards are often caused by a lack of knowledge or understanding of the underlying physical processes. In the case of large-scale natural events, there is insufficient knowledge of event frequencies and severity parameters, as well as of present and future boundary conditions. Expert judgment can be used to take account of these epistemic uncertainties, which might nevertheless be difficult to limit, especially with respect to event or scenario probabilities (Beven et al., 2015).

References

ATC, 1978. Tentative provisions for the development of seismic design regulations of buildings. Report ATC 3-06, Applied Technology Council.

Baban SMJ. Managing Geo-Based Challenges: World-Wide Case Studies and Sustainable Local Solutions. Cham: Springer; 2014: p. 58.

Baer EM. Teaching quantitative concepts in floods and flooding. Northfield, MN: Science Education Resource Center at Carlton College (SERC); 2007.

Baker JW. An Introduction to Probabilistic Seismic Hazard Analysis (PSHA), White Paper, Version 1.3. California: Stanford University; 2008.

Beven KJ, Aspinall WP, Bates PD, Borgomeo E, Goda K, Hall JW, Page T, Phillips JC, Rougier JT, Simpson M, Stephenson DB, Smith PJ, Wagener T, Watson M. Epistemic uncertainties and natural hazard risk assessment—Part 1: A review of the issues. Nat. Hazards Earth Syst. Sci. 2015;3:7333.

Bommer J. Deterministic vs. probabilistic seismic hazard assessment: an exaggerated and obstructive dichotomy. J. Earthquake Eng. 2002;6:43.

Campedel M, Cozzani V, Garcia-Agreda A, Salzano E. Extending the quantitative assessment of industrial risks to earthquake effects. Risk Anal. 2008;28:1231.

CEN, 2004. Eurocode 8 Design of Structures for earthquake resistance, European Committee for Standardization, Brussels.

Cornell CA. Engineering seismic risk analysis. Bull. Seismol. Soc. Am. 1968;58:1583.

de Moel H, van Alphen J, Aerts JCJH. Flood maps in Europe—methods, availability and use. Nat. Hazards Earth Syst. Sci. 2009;9:289.

ESPON, 2006. The Spatial Effects and Management of Natural and Technological Hazards in Europe—ESPON 1.3.1, European Spatial Planning Observation Network. www.espon.eu

European Union. Directive 2000/60/EC of the European Parliament and of the Council of 23 October 2000 establishing a framework for Community action in the field of water policy. Off. J. Eur. Commun. 2000; L327/43.

European Union. Directive 2007/60/EC of the European Parliament and of the Council of 23 October 2007 on the assessment and management of flood risks. Off. J. Eur. Union. 2007; L288/27.

Geist EL, Lynett JP. Source processes for the probabilistic assessment of tsunami hazards. Oceanography. 2014;27:86–93.

IAEA, 2003. Seismic Design & Qualification for Nuclear Power Plants - Safety Guide, Safety Standards Series No. NS-G-1.6, International Atomic Energy Agency, Vienna.

IPCC, 2007. In: Pachauri, R.K., Reisinger, A. (Eds.), “Climate Change 2007 Synthesis Report” Contribution of Working Groups I, II and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Geneva, Switzerland.

ISO, 2000. ISO 15686-1 Buildings and constructed assets—service life planning—Part 1: General principles, International Organization for Standardization, Geneva.

Koshimura S, Oie T, Yanagisawa H, Imamura F. Developing fragility functions for tsunami damage estimation using numerical model and post-tsunami data from Banda Aceh, Indonesia. Coastal Eng. J. 2009;51:243.

Kramer SL. Geotechnical Earthquake Engineering. Upper Saddle River, New Jersey: Prentice Hall; 1996.

Krausmann E, Cozzani V, Salzano E, Renni E. Industrial accidents triggered by natural hazards: an emerging risk issue. Nat. Hazards Earth Syst. Sci. 2011;11:921.

Krausmann E, Cruz AM, Affeltranger B. The impact of the 12 May 2008 Wenchuan earthquake on industrial facilities. J. Loss Prev. Process Ind. 2010;23:242.

Lanzano G, Salzano E, Santucci de Magistris F, Fabbrocino G. Seismic vulnerability of natural gas pipelines. Reliab. Eng. Syst. Saf. 2013;117:73.

Lanzano G, Salzano E, Santucci de Magistris F, Fabbrocino G. Seismic vulnerability of gas and liquid buried pipelines. J. Loss Prev. Process Ind. 2014;28:72.

Lanzano G, Santucci de Magistris F, Fabbrocino G, Salzano E. Seismic damage to pipelines in the framework of Natech risk assessment. J. Loss Prev. Process Ind. 2015;33:159.

Lanzano G, Santucci de Magistris F, Salzano E, Fabbrocino G. Vulnerability of industrial components to soil liquefaction. Chem. Eng. Trans. 2014;36:421.

Lorito S, Selva J, Basili R, Romano F, Tiberti MM, Piatanesi A. Probabilistic hazard for seismically induced tsunamis: accuracy and feasibility of inundation maps. Geophys. J. Int. 2015;200:574.

Luco, N., Ellingwood, B.R., Hamburger, R.O., Hooper, J.D., Kimball, J.K., Kircher, C.A., 2007. Risk-targeted versus current seismic design maps for the conterminous United States. In: Proceedings of the 2007 Convention of the Structural Engineers Association of California (SEAOC).

Mas E, Koshimura S, Suppasri A, Matsuoka M, Matsuyama M, Yoshii T, Jimenez C, Yamazaki F, Imamura F. Developing Tsunami fragility curves using remote sensing and survey data of the 2010 Chilean Tsunami in Dichato. Nat. Hazards Earth Syst. Sci. 2012;12:2689.

NBCC, 2005. National Building Code of Canada. Ottawa, Canada: Institute for Research in Construction, National Research Council of Canada.

NEHRP, 2003. NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures—Part I Provisions, US Federal Emergency Management Authority, Washington, DC.

NEHRP, 2009. NEHRP Recommended Seismic Provisions for New Buildings and Other Structures. Part I—Provisions, US Federal Emergency Management Authority, Washington, DC.

Nelson SA. Flooding Hazards, Prediction & Human Intervention. New Orleans, USA: Tulane University, Department of Earth and Environmental Sciences; 2015 http://www.tulane.edu/∼sanelson/Natural_Disasters/floodhaz.htm.

Nelson SA. Tsunami. New Orleans, USA.: Tulane University, Department of Earth and Environmental Sciences; 2012 http://www.tulane.edu/∼sanelson/Natural_Disasters/tsunami.htm.

NTC, 2008. Italian Building Code, DM January 14, 2008.

OPCM, 2003. Ordinanza del Presidente del Consiglio dei Ministri n. 3274 del 20 marzo 2003, Gazzetta Ufficiale della Repubblica italiana n.105 dell’8 maggio 2003, Primi elementi in materia di criteri generali per la classificazione sismica del territorio nazionale e di normative tecniche per le costruzioni in zona sismica.

Panico, A., Basco, A., Lanzano, G., Pirozzi, F., Santucci de Magistris, F., Fabbrocino, G., Salzano, E., 2016. Evaluating the structural priorities for the seismic vulnerability of civilian and industrial wastewater treatment plants, Saf. Sci., in press.

Pellegrino, A.M., Scotto di Santolo, A., Evangelista, A., Coussot, P., 2010. Rheological behaviour of pyroclastic debris flow. Third International Conference on Monitoring, Simulation, Prevention and Remediation of Dense and Debris Flow “Debris Flow III,” May 24–26, Milan, Italy.

Pellegrino AM, Scotto di Santolo A, Schippa L. An integrated procedure to evaluate rheological parameters to model debris flows. Eng. Geol. 2015;196:88.

PGEC, 2010. Methodology for probabilistic tsunami hazard analysis: trial application for the Diablo Canyon Power Plant Site. Berkeley, CA: Pacific Gas & Electric Company.

Rikitake T, Aida I. Tsunami hazard probability in Japan. Bull. Seismol. Soc. Am. 1988;78:1268.

Salzano E, Basco A, Busini V, Cozzani V, Renni E, Rota R. Public awareness promoting new or emerging risk: industrial accidents triggered by natural hazards. J. Risk Res. 2013;16:469.

Salzano E, Garcia-Agreda A, Di Carluccio A, Fabbrocino G. Risk assessment and early warning systems for industrial facilities in seismic zones. Reliab. Eng. Syst. Saf. 2009;94:1577.

Salzano E, Iervolino I, Fabbrocino G. Seismic risk of atmospheric storage tanks in the framework of quantitative risk analysis. J. Loss Prev. Process Ind. 2003;16:403.

Suppasri A, Mas E, Charvet I, Gunasekera R, Imai K, Fukutani Y, Abe Y, Imamura F. Building damage characteristics based on surveyed data and fragility curves of the 2011 Great East Japan Tsunami. Nat. Hazards. 2013;66:319.

Thio HK, Somerville P, Polet J. Probabilistic Tsunami Hazard in California. Pacific Earthquake Engineering Research Center, PEER Report 2010/108. Berkeley, CA: University of California; 2010.
