Chapter 15

Integrating Human and Organizational Factors
into the BCD Risk Analysis Model: An
Influence Diagram-based Approach 1


15.1. Introduction

With technological advances and the growing complexity of industrial systems, safety in these systems must be examined with their human operators in mind. The study of risk in industrial systems has therefore developed considerably over recent years. Early risk analysis research focused mainly on technical aspects, yet serious accidents continued to occur (such as the 1986 Chernobyl nuclear disaster and the 1984 Bhopal disaster). This highlighted the need for an integrated risk analysis approach that considers technical, environmental, human, and organizational aspects together.

In this chapter, we examine research carried out within the SOMAIR Project, which aims to provide a detailed study of human factors in risk analysis. It builds on research (undertaken by project partners EDF Energy and CRAN) into integrated risk analysis [LÉG 09, LÉG 08] and on research into analyzing human aspects in a human–machine system, particularly within the framework of the benefit-cost-deficit (BCD) model developed by another project partner, LAMIH [POL 09, PAC 07, ROB 04, VAN 11]. The project partners share a common scientific interest, notably in the evaluation of ways of limiting risks, i.e. barriers, which are means of preventing the occurrence of a serious incident and limiting its consequences [HOL 04, POL 01].

The risk analysis model developed by EDF and CRAN [LÉG 09, LÉG 08] integrates four dimensions: technical, environmental, organizational, and human. The model is represented as a Bayesian network that encodes the information relating to each of these dimensions. For the organizational dimension, several factors related to the system's organization, known as pathogenic organizational factors (POF), have been examined. For the human dimension, several factors (or items) characterizing human actions are identified. For the technical dimension, the operation of defense barriers and the causes (which may also include environmental factors) and consequences of serious incidents are studied; this dimension is based on the combination of an inductive and a deductive (bow-tie) method. When organizational factors are pathogenic, they have a negative impact on the characteristics (items) of human actions, which, in turn, can degrade their performance. Human actions are ineffective when they fail to ensure the operation of one or several barriers, and when these barriers are unavailable (non-operational), one or several serious incidents can occur. The original aim of the proposed Bayesian network-based representation is to estimate the occurrence of a serious incident while accounting for all the knowledge and factors from each dimension.

The other partner in the project, LAMIH [POL 09, ROB 04], has developed a model called BCD. This model draws on the behavior and opinions of human operators faced with barriers (including the voluntary infringement of barriers) in pursuit of objectives in productivity, workload, time, etc. Since the purpose of any barrier is to protect the system and the operator, the BCD model also allows us to evaluate the costs and risks incurred when barriers are broken. The BCD model compares action plans from a multicriteria perspective: the benefit corresponds to the improvement of the deviant plan over the prescribed one, the cost corresponds to the acceptable losses incurred, and the deficit corresponds to the losses likely to occur in the case of failure (undesirable consequences). The BCD model has also been combined with pattern recognition tools (case-based reasoning, neural networks) to predict the behavior of human operators in relation to barriers. These studies have been carried out in the context of vehicle driving as well as guided transport [POL 06]. The BCD model therefore allows us to evaluate both human operators and barriers in terms of effectiveness.

Probabilistic graphical models such as Bayesian networks [JEN 96, NAI 07] and influence diagrams [OLI 90] are often used in risk analysis because they have several advantages: processing data taken from operational feedback together with data provided by expert opinions, handling simulation and diagnosis within the same model, modeling different types of information (discrete, continuous, symbolic, etc.), representing uncertain and incomplete information, and facilitating the design of a risk analysis model using existing tools such as Netica and Bayesia.

In this chapter, we propose an approach for analyzing human actions. This approach is based on the BCD model and integrates human and organizational factors that may affect the actions of human operators and the system. In the proposed approach, each action is evaluated in terms of its benefit and/or risk against multiple criteria (such as productivity, safety, workload, and time). The approach also allows us to analyze the system's operational process (the measures taken in the case of high productivity, for example). To account for all these elements and to design the final model, we use influence diagrams. This choice is motivated by the fact that influence diagrams contain not only chance nodes, as Bayesian networks do, but also decision and utility nodes. In effect, influence diagrams allow us to rank the different alternatives (i.e. actions or decisions) from the least to the most risky.

The remainder of the chapter is organized as follows: section 15.2 examines the BCD model and section 15.3 presents our own analysis model for human behavior. In section 15.4, we illustrate the proposed model using an example of an industrial printing press. Finally, section 15.5 concludes the chapter.

15.2. Introduction of the BCD (benefit-cost-deficit) approach

Human operators are always present in the running of a system, whether at the design stage or during operation [VIL 92]. Their roles are constantly challenged by new technologies and by productivity and safety targets [VAN 03]. Their actions can be considered not only a source of performance and safety [HOL 06], but also a potential source of error [REA 90]; paradoxically, the human operator is one of the main causes of the lack of safety. For example, as cited in [TRU 08], statistics from the Transportation Safety Board of Canada show that 74% of accidents at sea are due to human factors (such as misunderstandings between the pilot and the captain and a lack of communication).

Human operators thus work within a system with their own intentions, aims, and abilities, as well as intelligence, pressures, tiredness, lack of motivation, etc. All these factors and external pressures can cause deviations from prescribed commands and from the system's normal operation. Such deviations are commonly observed: for example, in 2003, there were 20 million recorded infractions of traffic law in France alone (right of way, traffic lights, speed limits, etc.) [DJE 07].

The benefit-cost-deficit (BCD) model [POL 09, PAC 07, POL 01] was developed to study deviant behavior in human operators faced with barriers, i.e. ignoring them (infringing barriers). Barriers are implemented by the designer of the human-machine system to protect it from failures and harmful consequences and to ensure the safety of both the system and the human operator (see Figure 15.1).

Figure 15.1. Infringing barriers [POL 02]

ch15-fig15.1.jpg

DEFINITION 15.1.– A barrier is defined as an obstacle, obstruction, or difficulty that may (1) prevent the execution of an action or the occurrence of a serious event or (2) prevent or reduce the impact of its consequences [HOL 99].

There are different types of barriers:

Material barriers: These are physical barriers that protect the system from hazardous actions by human operators. They do not need to be perceived or interpreted by the human operator to serve their purpose, for example metal grids that prevent access to high-voltage zones.

Functional barriers: These are barriers that impede an action from being carried out by establishing, for example, a logical or temporal precondition. These barriers do not need to be seen by the human operator, but they require the presence of preconditions that must be verified before a result can be obtained, for example autopilot mode in trains.

Symbolic barriers: These are barriers that require interpretation in order for a human operator to react or respond to the messages they contain, for example signs and posters indicating the need to wear a helmet on construction sites.

Immaterial barriers: These are barriers that are not necessarily present or representable in the work environment but need to be recognized by the operator in order to be activated, e.g. operational procedures.

According to [POL 09, PAC 07, POL 01, POL 02], an infringement of a barrier is an intentional behavioral drift whose consequences can be analyzed according to three parameters (benefit, cost, and deficit):

Immediate benefits (B): These represent the gains (sought by the human operator) associated with the infringement of the barrier.

Immediate costs (C): These are acceptable losses for the operator in order to reach anticipated benefits.

Potential deficits (D): These are unacceptable losses due to a possible failure or due to the lack of respect for a barrier.

An action A, whether it infringes or respects barriers, can lead to two kinds of situations (consequences) when it is carried out:

Success: This refers to a situation where the execution of an action is correct and the consequence is written as CS(A). This is the consequence sought by the operator.

Failure: This refers to a situation where the execution of an action is incorrect and does not cause a desired result. This consequence will be written as CF(A).

The consequences of an action correspond to changes in the system's states where the actions are carried out. They can be evaluated according to several criteria: the task's duration, workload, quality, productivity, etc.

When the human operator has to decide whether to respect or ignore a barrier, two actions must be evaluated:

– The prescribed action that corresponds to the respect of the barrier. This action will be denoted by P.

– The action resulting from an infringement of the barrier. This will be written as FB.

To determine the benefits, costs, and deficits, [POL 01, POL 02] compare the consequences of the prescribed behavior and the deviant behavior. Bi(FB) (respectively Ci(FB) and Di(FB)) represents the benefit (respectively the cost and deficit) related to the infringement of the barrier for each criterion i. For a criterion where larger values are preferred, the comparison is:

Bi(FB) = CSi(FB) − CSi(P), when CSi(FB) > CSi(P)
Ci(FB) = CSi(P) − CSi(FB), when CSi(P) > CSi(FB)
Di(FB) = CFi(P) − CFi(FB), when CFi(P) > CFi(FB)
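This comparison can be sketched as a small helper function (a hypothetical illustration, not code from the project; consequences are encoded as numbers where larger values are preferred):

```python
def bcd(cs_p, cf_p, cs_fb, cf_fb):
    """Compare an infringing action FB with the prescribed action P on one
    criterion. cs_* are success-case consequences, cf_* failure-case ones;
    larger values are assumed to be preferred."""
    benefit = max(0.0, cs_fb - cs_p)   # gain over P when the action succeeds
    cost = max(0.0, cs_p - cs_fb)      # acceptable loss when it succeeds
    deficit = max(0.0, cf_p - cf_fb)   # unacceptable loss when it fails
    return benefit, cost, deficit

# For a criterion such as task duration, where smaller is better, the
# consequence values can simply be negated before calling the helper.
print(bcd(cs_p=6.0, cf_p=5.0, cs_fb=6.0, cf_fb=0.0))  # → (0.0, 0.0, 5.0)
```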

Figure 15.2 illustrates the calculation of consequences for the infringement of barriers [POL 02].

Figure 15.2. Evaluation of benefits, costs and deficits

ch15-fig15.2.jpg

The BCD model has also been used to solve several problems, such as the control and analysis of human operator behavior within a system [VAN 10, PAC 07, ROB 04, VAN 03], biomechanical applications [ROB 06], the prediction of human reactions to barriers, and the identification of errors using pattern recognition tools (neural networks and case-based reasoning) [ZHA 04, VAN 09].

EXAMPLE 15.1.– An example that illustrates the benefits, costs, and deficits likely to be obtained through deviant behavior is Savage's omelet problem [SAV 54]. A user has a bowl containing five broken eggs for an omelet, a sixth egg whose state (good or bad) is unknown, and an empty cup. Suppose the problem involves a restaurant with specific objectives to achieve: satisfying clients by ensuring a minimal waiting time and a good omelet (the two criteria considered in this example are therefore time and production). Since there is uncertainty concerning the state of the sixth egg, the prescribed behavior is to first crack it into the empty cup (P) to verify whether it is good or bad. Cracking the egg directly into the bowl that already contains five eggs (FB1) or throwing it away (FB2) constitutes deviant behavior by the user, who may not respect the prescribed behavior because he/she also has objectives to achieve (avoiding washing a cup, for example) or is subject to other pressures. The user's decision is therefore a choice among the three following actions:

1) Crack the sixth egg into the empty cup to check its state (good or bad).

2) Crack the sixth egg directly into the bowl.

3) Throw the egg away without cracking it.

Table 15.1. Example representation of a problem using the BCD model

States of the egg
Actions | Good egg | Bad egg
Crack the egg (P) | Six-egg omelet with a cup to wash | Five-egg omelet with a cup to wash
Crack the egg into the bowl (FB1) | Six-egg omelet | Spoilt omelet
Throw the egg away (FB2) | Five-egg omelet (one egg wasted) | Five-egg omelet

Table 15.1 shows the consequences of each action according to its benefit, cost, and deficit.

For this problem, when the prescribed behavior is carried out successfully, the consequences are a six-egg omelet (CS_Production = 6 eggs) produced in 10 min (CS_Time = 10 min). When the prescribed behavior "fails" (the sixth egg is bad), the consequence is a five-egg omelet (CF_Production = 5 eggs) and we suppose that the omelet does not have to be remade (CF_Time = 10 min). The prescribed behavior P therefore yields zero benefit, zero cost, and zero deficit in both cases. For the deviant action FB1, when the egg is good, there is a benefit in terms of time (2 min gained, since the user cracks the egg directly into the bowl and there is no cup to wash). However, if the egg is bad, this action (FB1) means that the user loses the whole omelet (deficit = −5 eggs). For action FB2, when the egg is good, the consequences are a production cost (one good egg lost) and a benefit in terms of time (2 min gained by throwing the egg away). In summary, a deviant behavior yields benefits if its consequences in the case of success are better than those of the prescribed behavior (i.e. a production of more than six eggs or a time below 10 min), and it yields costs and/or deficits if its consequences are worse than those of the prescribed behavior (i.e. fewer than six eggs in the case of success, fewer than five eggs in the case of failure, or a time above 10 min).
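The success-case comparisons of the omelet example can be checked with a short sketch (an illustrative encoding: time is negated so that larger values are preferred; the failure-case deficit of FB1 is read directly from Table 15.2):

```python
# Success-case consequences per action (assumed numeric encoding):
# production in eggs, time in minutes negated so that larger is better.
cs = {
    "P": {"prod": 6, "time": -10},
    "FB1": {"prod": 6, "time": -8},   # no cup to wash: 2 min gained
    "FB2": {"prod": 5, "time": -8},   # one good egg thrown away
}

for action in ("FB1", "FB2"):
    for crit in ("prod", "time"):
        benefit = max(0, cs[action][crit] - cs["P"][crit])
        cost = max(0, cs["P"][crit] - cs[action][crit])
        print(action, crit, "benefit =", benefit, "cost =", cost)
# FB1 shows a time benefit of 2 min and no cost; FB2 shows a time benefit
# of 2 min and a production cost of 1 egg, matching the discussion above.
```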

15.3. Analysis model for human actions

The decision to infringe or respect one or several barriers generally rests with the human operator. His/her decision is motivated by specific objectives that he/she wants to achieve, such as increased productivity and time gains. These objectives are related to his/her personal motivations or intentions as well as to the organization (where, depending on the system, there are priorities among the objectives pursued, such as safety, productivity, and quality). Such factors, relating both to the organization and to the human operators, must be taken into account when analyzing human actions.

Table 15.2. The omelet problem (consequences in terms of BCD)

States of the egg
Actions | Good egg (CS) | Bad egg (CF)
P | CS_Production = 6 eggs; CS_Time = 10 min | CF_Production = 5 eggs
FB1 | CS_Production = 6 eggs; CS_Time = 8 min (time benefit) | CF_Production = −5 eggs (production deficit)
FB2 | CS_Production = 5 eggs (production cost); CS_Time = 8 min (time benefit) | CF_Production = 5 eggs

Our proposition therefore consists of extending the BCD model by introducing several organizational and human factors that influence operators' actions. We propose a model based on the structure of an influence diagram, an extension of a Bayesian network, to represent and analyze the actions of a human operator in a system. The factors considered can be defined according to the problem being studied and the objectives of the system in question. The approach also accounts for multiple criteria such as productivity, safety, and workload. The proposed model incorporates the organizational and human factors defined within the integrated risk analysis developed by the SOMAIR project partners EDF Energy and CRAN [DUV 08, LÉG 08, LÉG 09]. The final model allows us to calculate the utility of an action. The utility of the prescribed action must be zero (which signifies that if all the barriers are respected by human operators, the consequences with respect to each criterion correspond to zero benefit, cost, and deficit). If the utility of the deviant action (infringing a barrier) is greater than that of the prescribed action, there is a benefit on one or several criteria; otherwise, there is a risk (deficit). The objective of developing this kind of model lies in the ability to analyze human actions, notably the influence of the different factors affecting their efficacy.

15.3.1. Accounting for organizational and human factors

The aim is to consider organizational factors in risk analysis and to identify those likely to affect the actions of human operators and cause negative consequences (for production, safety, etc.). In [LÉG 08, LÉG 09], seven organizational factors are defined for studying risk analysis in industrial systems: weakness in the organization's safety culture (OSC), failure in day-to-day safety management (DSM), weakness of monitoring organizations (MO) (or watchdogs), poor treatment of organizational complexity (OC), difficulty in implementing feedback (FB), production pressures (PP), and failure to reexamine design hypotheses (DH).

Human factors can influence the efficacy of operators’ actions. They can include experience, training, lack of respect for regulations, feedback, etc. When an organizational factor has a pathogenic (or dangerous) effect, e.g. when there are production pressures, this can impact the actions of human operators (lack of concentration by users, for example). In the proposed model, we consider the human factors proposed by [LÉG 08, LÉG 09], which are as follows: delegation (De), aids (Ai), training (Tr), experience (Ex), the possibility of respecting guidelines (RG), contextual factors (CF), dynamic management and group collective (DMGC), management and achievement of objectives (MAO), and feedback (Fee).

Before examining the proposed model, we provide a brief overview of influence diagrams.

15.3.2. Influence diagrams

An influence diagram (ID) [OLI 90] is a directed acyclic graph (DAG) commonly used to model decision problems in the presence of uncertain information. An ID contains three types of nodes. The first is decision nodes, which represent the decisions of the problem studied. The second is chance nodes, which represent the random variables. If a variable does not depend on any other variable, a priori information is introduced (e.g. the a priori probability of the state "rain" is 0.7). Otherwise, the information is given as conditional probability tables, which are filled in manually or automatically using mathematical, logical, or other functions. Finally, utility nodes (represented by diamonds) quantify the utility value of a decision. Note that the domain of a variable is not necessarily binary (several states can be associated with a variable). Also note that an influence diagram is an extension of a Bayesian network, which contains only chance nodes.

Evaluating an influence diagram consists of propagating a set of known information (observations) toward a set of variables of interest in the designed model. The calculation of a posteriori probabilities and expected utilities (inference) can be carried out on any set of variables in the model. Several algorithms exist for the evaluation (inference) of an ID [COO 88, SHA 92].

Figure 15.3 shows an example of a simple ID illustrating the situation of taking or leaving behind an umbrella when going out [JEN 96].

The ID in this example contains two chance nodes (forecast and weather) with associated probabilistic information. The decision node has two states (take the umbrella and leave the umbrella at home). Satisfaction is a utility node that represents the utility of each decision. The aim is to calculate the expected utility of each decision; the one with the greatest expected utility is considered the optimal decision.

Figure 15.3. Example of an influence diagram

ch15-fig15.3.jpg
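The umbrella example can be evaluated by hand in a few lines; the probabilities and utility values below are illustrative assumptions, not values from the source.

```python
# Chance nodes: prior P(weather) and conditional P(forecast | weather).
p_weather = {"rain": 0.3, "dry": 0.7}
p_forecast = {
    ("rainy", "rain"): 0.8, ("sunny", "rain"): 0.2,
    ("rainy", "dry"): 0.1, ("sunny", "dry"): 0.9,
}
# Utility node: U(decision, weather).
utility = {
    ("take", "rain"): 70, ("take", "dry"): 20,
    ("leave", "rain"): 0, ("leave", "dry"): 100,
}

def expected_utility(decision, forecast):
    # Bayes' rule gives P(weather | forecast); then average the utilities.
    joint = {w: p_forecast[(forecast, w)] * p_weather[w] for w in p_weather}
    z = sum(joint.values())
    return sum(joint[w] / z * utility[(decision, w)] for w in p_weather)

# With a rainy forecast, taking the umbrella maximizes expected utility.
best = max(("take", "leave"), key=lambda d: expected_utility(d, "rainy"))
print(best)  # → take
```

The same computation with forecast "sunny" favors leaving the umbrella at home, which is the ranking behavior the chapter exploits for actions.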

15.3.3. Structure and parameters associated with the risk analysis model

To model a given problem with a probabilistic graphic model such as an influence diagram or a Bayesian network, we need to define its components: (1) a qualitative component that defines the model's different variables and the relationships between these variables (the graph or model's structure) and (2) a quantitative component that defines the local probabilities or benefits associated with the model's variables (the model's parameters).

Figure 15.4 illustrates the proposed risk analysis model's structure.

The model is composed of several variables that can be described as follows:

– variables representing the decisions to infringe or respect barriers (two states associated with each variable: respected and infringed);

– variables concerning organizational factors;

– variables representing human factors;

– variables indicating the probability of success or failure for human actions (infringement or respect of one or several barriers);

– variables representing the different criteria (e.g. quality, productivity, time, and workload), each associated with a weight that indicates the significance of the criterion for the organization;

– several variables representing the consequences of infringing barriers in terms of benefits, costs, deficits, and utility.

Figure 15.4. Structure of the proposed risk analysis model

ch15-fig15.4.gif

When designing the risk analysis model, each of the variables examined above corresponds to a node in the influence diagram. The variables representing organizational and human factors, the importance of the criteria, and the consequences of actions (benefit, cost, deficit) are represented as chance nodes. Human actions concerning the infringement or respect of barriers are represented by decision nodes. The utility node (U) quantifies the value of each action (infringement of one or several barriers). When the utility value of an action is positive, the action carries a benefit; otherwise, the action can be said to be hazardous.

Each organizational factor takes two possible states (present and absent) with a probability for each state, and likewise each human factor takes two states (present and impaired). The importance of each criterion (denoted Impi) is evaluated on the interval [0, 1] (see Table 15.3).

The variables allowing us to evaluate the consequences for each action are defined as follows:

Benefit (Bi), cost (Ci), and deficit (Di): These variables are defined in section 15.2 (BCD model). Each decision to infringe or respect a barrier is evaluated with respect to each criterion, i.e. each criterion is allocated three probability tables (benefit, cost, and deficit). The values associated with benefit, cost, and deficit lie in the interval [0, 1] (see Table 15.4);

Total benefit (Bt): This variable represents the overall benefit in relation to all the criteria. Bt is calculated as follows:

Bt = Σi Impi × Bi

where Impi is the value of importance associated with each criterion i.

Total cost (Ct): this variable represents the overall cost in relation to all the criteria. Ct is calculated as follows:

Ct = Σi Impi × Ci

where Impi is the value of importance associated with each criterion i;

Total deficit (Dt): This variable represents the overall deficit by considering all the criteria. Dt is calculated as follows:

Dt = Σi Impi × Di

Utility (U): This variable represents the risk or benefit of the decision taken (infringement or respect of barriers). U is calculated as follows: U = Bt − Ct − Dt.

The values associated with the overall benefit, cost, and deficit lie in the interval [0, 1] (see Table 15.4).
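Under the weighted-sum reading of the totals above, the aggregation step can be sketched as follows (the weights and per-criterion values are illustrative assumptions, not data from the chapter):

```python
# Importance weights Imp_i and per-criterion benefit/cost/deficit values,
# all in [0, 1] as in Tables 15.3 and 15.4 (illustrative numbers).
imp = {"productivity": 0.8, "safety": 0.9, "time": 0.5}
b = {"productivity": 0.3, "safety": 0.0, "time": 0.4}
c = {"productivity": 0.0, "safety": 0.1, "time": 0.0}
d = {"productivity": 0.0, "safety": 0.6, "time": 0.2}

bt = sum(imp[i] * b[i] for i in imp)   # total benefit Bt
ct = sum(imp[i] * c[i] for i in imp)   # total cost Ct
dt = sum(imp[i] * d[i] for i in imp)   # total deficit Dt
u = bt - ct - dt                        # utility U = Bt - Ct - Dt
print(round(u, 3))  # negative here: the action would be hazardous
```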

Table 15.3. Values associated with the importance criterion

Evaluation Importance criterion (Impi)
0–0.2 Not important
0.2–0.4 Not very important
0.4–0.6 Average
0.6–0.8 Important
0.8–1 Very important

Table 15.4. Values associated with the consequences of human actions

Evaluation Benefit (cost and deficit, respectively) Overall benefit (cost and deficit, respectively)
0–0.2 Zero Zero
0.2–0.4 Weak Weak
0.4–0.6 Average Average
0.6–1 Heightened Heightened
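Tables 15.3 and 15.4 can be read as a simple binning of a numeric evaluation. A hypothetical helper (assigning boundary values to the upper band, an assumption since the tables do not specify this):

```python
def consequence_label(x):
    """Map an evaluation in [0, 1] to the qualitative labels of Table 15.4."""
    if x < 0.2:
        return "Zero"
    if x < 0.4:
        return "Weak"
    if x < 0.6:
        return "Average"
    return "Heightened"

print(consequence_label(0.57))  # → Average
```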

15.4. Example application

15.4.1. Description of the case study: industrial printing presses

In this chapter, we examine an example application in an industrial printing press [POL 02, VAN 03]. An industrial printing press is generally composed of operational blocks distributed over three levels. At the beginning of each line, a roller feeds the line with paper. It is followed by four printing blocks (black, cyan, magenta, and yellow), which create the offset print. The roll of paper then passes through a drier and a cooler before being fed into the folding machine to be cut and folded according to the type of product or book. Books are then sent to the mock-up room and, if the sample copies are acceptable (from the point of view of the operators), they are sent on to the receiver; otherwise, they are discarded. The minimum number of people required to run this dual line is four: two machine operators, a winder, and a receiver. The winder works at the front of the lines and feeds the rollers with paper; he/she also ensures that the splicing between rolls occurs correctly. The receiver stays at the end of the line to stack the paper, ensure a consistent supply of pallets, etc. The machine operators work at several control posts from which they can manage the lines (color level and superposition, speed, etc.).

The procedures set out by the manufacturer are the following:

1) use appropriate protective equipment (gloves, safety glasses);

2) press the Print button;

3) press the Emergency stop button;

4) clean all visible surfaces with a sponge and an appropriate cleaning product;

5) dry the surface with a cloth;

6) release the Emergency stop button;

7) press the Maintenance button;

8) repeat steps 3–7 as required.

After preliminary observations [POL 02, VAN 03], the actions actually carried out by the human operators are as follows:

1) press the Print button;

2) continue slow rotation;

3) clean the surface of the blanket using a sponge and an appropriate cleaning product;

4) dry the surface of the blanket;

5) press the Stop button.

When the human machine operators carry out these actions, the blanket is washed and dried while in rotation. The barriers not respected by the human operators are thus: using protective equipment and not interfering with the machine while it operates at high speed. This reduces the time required to clean the rollers but exposes the human operators to three hazards: crushing of the hands, skin irritation, and getting solvent in the eyes.

15.4.2. Presentation of the model for the test case

In this section, we model this problem using the influence diagram formalism. We consider the two barriers (use of protective equipment and intervention on the machine only when stopped) and three evaluation criteria (workload, safety, and time). Figure 15.5 shows the model's structure representing the problem of an industrial printing press.

We consider three organizational factors: production pressures (PP), weakness in the organizational safety culture (OSC), and weakness in management (WM). We also introduce human factors from section 15.3.1. As the model shows (see Figure 15.5), each human action is split into three phases (preparation, implementation, and closing), and each phase has two states (effective and ineffective). Concerning the human factors influencing the efficacy of human actions: for example, when the factors delegation, training, and aids are impaired, there is a higher probability that the preparation of the action will be ineffective. Taking into account the two barriers (protective equipment and intervention on the machine only when stopped), the human decisions or actions are represented by the decision node. There are four actions:

1) B1_B2: This action represents the respect for the two barriers by the human operator, i.e. using protective equipment and not interfering with the machine while operating.

2) NB1_B2: This action concerns the infringement of the first barrier (not using protective equipment) and respecting the second (not interfering with the machine while operating).

3) B1_NB2: This action concerns respect for the first barrier and infringement of the second.

4) NB1_NB2: This action concerns the infringement of the two barriers.

Figure 15.5. Analysis model for human actions corresponding to the issue of an industrial printing press

ch15-fig15.5.jpg

The aim of our model is to calculate the usefulness of each of these actions and determine the most hazardous or the most beneficial action. The advantage is that this allows us to identify factors causing the deficit or benefit.

In the remainder of this chapter, we present some examples of our model in terms of observations and analysis of the results.

Example illustration 1: In this case, we have introduced observations about the three organizational factors considered in our example: OSC = Present, WM = Present, and PP = Present. This indicates weaknesses in the organizational safety culture and in management, together with production pressures. Applying inference in the designed model updates the probabilities of the other variables and calculates the utility of each action. In Figure 15.6, we can see that the three actions involving the infringement of the first barrier, the second, or both (NB1_B2, B1_NB2, and NB1_NB2) present a risk (negative utility). The actions are ordered according to their degree of risk as follows: action NB1_NB2 with a risk equal to −0.3073, action B1_NB2 with a risk equal to −0.1211, and action NB1_B2 with a risk equal to −0.0799. This result shows that infringing both barriers is the most hazardous action. This is because the majority of human factors are impaired and each of the preparation, implementation, and closing phases is ineffective with high probability.
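The ranking step of this illustration can be reproduced directly from the utility values (the zero utility of the fully prescribed action B1_B2 follows from the model's construction, under which the prescribed action has zero benefit, cost, and deficit):

```python
# Utilities of the four actions in example illustration 1; the prescribed
# action B1_B2 has zero utility by construction.
utilities = {
    "B1_B2": 0.0,
    "NB1_B2": -0.0799,
    "B1_NB2": -0.1211,
    "NB1_NB2": -0.3073,
}
ranked = sorted(utilities, key=utilities.get)  # most hazardous first
print(ranked)  # → ['NB1_NB2', 'B1_NB2', 'NB1_B2', 'B1_B2']
```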

Figure 15.7 shows the consequences of the action NB1_NB2 under the same observations as in example illustration 1. The model in this figure shows that this action produces a heightened deficit for the safety criterion (with a probability equal to 0.567) and an average deficit for the time and workload criteria.

Example illustration 2: In this case, we have introduced observations setting each organizational factor to the state absent: OSC = Absent, WM = Absent, and PP = Absent. This indicates that there is no weakness in the organizational safety culture or in management, and no production pressures. We can see in Figure 15.8 that the activity is carried out successfully (with a probability equal to 0.868). Since the barriers are likely to be infringed successfully, the three actions (B1_NB2, NB1_B2, and NB1_NB2) present a slight benefit, in particular with regard to workload. This is shown in Figure 15.9, which gives the consequences of the action NB1_NB2 under the same observations.

The model in Figure 15.9 shows that the action NB1_NB2 presents a high benefit for the two criteria, workload and time, but zero benefit and zero cost for the safety criterion.
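The per-criterion reading of such a result can be sketched as a small benefit/cost/deficit table. The probabilities and weights below are hypothetical, chosen only to reproduce the qualitative pattern (benefit on workload and time, essentially nothing on safety), not the values of Figure 15.9:

```python
# Hypothetical BCD consequence table for an action such as NB1_NB2
# (illustrative numbers only).
bcd = {
    "workload": {"benefit": 0.70, "cost": 0.20, "deficit": 0.10},
    "time":     {"benefit": 0.60, "cost": 0.25, "deficit": 0.15},
    "safety":   {"benefit": 0.00, "cost": 0.00, "deficit": 0.05},
}

# Assumed weighting: benefits count positively, costs and deficits
# negatively (deficits more heavily, as near-irreversible consequences).
weights = {"benefit": 1.0, "cost": -0.5, "deficit": -1.0}

def net_utility(criterion):
    """Weighted sum of the benefit/cost/deficit probabilities."""
    return sum(weights[k] * p for k, p in bcd[criterion].items())

for criterion in bcd:
    print(criterion, round(net_utility(criterion), 3))
```

Under this assumed weighting, workload and time come out with a positive net utility while safety stays slightly negative, which is the pattern the example describes.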

Figure 15.6. Model corresponding to example illustration 1

ch15-fig15.6.jpg

Figure 15.7. Consequences of the action NB1_NB2 concerning example illustration 1

ch15-fig15.7.jpg

Figure 15.8. Model corresponding to example illustration 2

ch15-fig15.8.jpg

Figure 15.9. Consequences of the action NB1_NB2 concerning example illustration 2

ch15-fig15.9.jpg

Figure 15.10. Model corresponding to example illustration 3

ch15-fig15.10.jpg

Example illustration 3: In this case, we have introduced observations on the three stages (preparation, implementation, and closing), each corresponding to the state 'ineffective'. We can see in Figure 15.10 that nearly all human factors are impaired and that the activity corresponds to the state of failure with a probability equal to 0.752; therefore, any infringement of one or several barriers is hazardous.
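One simple way to see why observing all three stages as ineffective drives the activity toward failure is a noisy-OR style combination, in which each ineffective stage contributes independently to failure. The per-stage parameters below are hypothetical, not taken from the model:

```python
# Noisy-OR sketch with hypothetical parameters: for each stage, the value
# is P(activity still succeeds | that stage alone is ineffective).
survive = {"preparation": 0.7, "implementation": 0.5, "closing": 0.8}

def p_failure(ineffective_stages):
    """Noisy-OR: independent failure causes, so survival factors multiply."""
    p_ok = 1.0
    for stage in ineffective_stages:
        p_ok *= survive[stage]
    return 1.0 - p_ok

print(round(p_failure(["preparation"]), 2))
print(round(p_failure(["preparation", "implementation", "closing"]), 2))
```

With these placeholder values, a single ineffective stage yields only a moderate failure probability, while all three together push it above 0.7, the same qualitative jump the example exhibits.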

15.5. Conclusion

In this chapter, we have proposed an approach for studying and analyzing human actions on the basis of the BCD model. The model allows us to study and evaluate the consequences of deviations from prescribed behavior in the system (infringements of barriers). To design the proposed model, we have used a graphical representation based on an influence diagram, a powerful and effective formalism for modeling decision problems and accounting for uncertain information. We have also examined human and organizational factors. The advantage of this approach is that it allows us both to analyze and predict the behavior of human operators and to revise the prescribed behavior regulations and the barriers implemented in the system examined. We have illustrated our model with a real example of an industrial printing press where human operators often do not respect the prescribed regulations for cleaning blankets.

15.6. Acknowledgments

This chapter is written within the context of the SOMAIR Project that is funded by the Scientific Interest Group 'Supervision, Safety and Security of Complex Systems' (SIG 3SG).

15.7. Bibliography

[COO 88] COOPER G., 'A method for using belief networks as influence diagrams', Proceedings of the 4th Workshop on Uncertainty in Artificial Intelligence, Minneapolis, MN, pp. 55–63, 1988.

[DJE 07] DJELASSI A.C., Modélisation et prédiction des franchissements de barrières basée sur l'utilité espérée et le renforcement de l'apprentissage. Application à la conduite automobile, Doctoral Thesis, University of Valenciennes and Hainaut-Cambresis of Valenciennes, Valenciennes, 2007.

[DUV 08] DUVAL C., LÉGER A., FARRET R., WEBER P., 'Méthodologie d'analyse de risques pour les systèmes socio-techniques complexes et application à un cas industriel', 16e Congrès de Maîtrise des Risques et de Sûreté de Fonctionnement, Lambda Mu 16, Avignon, France, CDROM, 2008.

[HOL 99] HOLLNAGEL E., 'Accidents and barriers', Proceedings of the European Conference on Cognitive Science Approaches to Process Control, Villeneuve d'Ascq, France, pp. 175–180, 1999.

[HOL 04] HOLLNAGEL E., Barriers and Accident Prevention, Ashgate, Aldershot, UK, 2004.

[HOL 06] HOLLNAGEL E., WOODS D.D., LEVESON N., Resilience Engineering: Concepts and Precepts, Ashgate, Aldershot, UK, New York, 2006.

[JEN 96] JENSEN F.V., An Introduction to Bayesian Networks, UCL Press, London, 1996.

[LÉG 08] LÉGER A., FARRET R., DUVAL C., LEVRAT E., WEBER P., IUNG B., 'A safety barriers-based approach for the risk analysis of socio-technical systems', 17th IFAC World Congress, Seoul, South Korea, 2008.

[LÉG 09] LÉGER A., Contribution à la formalisation unifiée des connaissances fonctionnelles et organisationnelles d'un système industriel en vue d'une évaluation quantitative des risques et de l'impact des barrières envisagées, Doctoral Thesis, University Henri Poincaré, Nancy 1, 2009.

[NAI 07] NAIM P., WUILLEMIN P.-H., LERAY P., POURRET O., BECKER A., Réseaux Bayésiens, Eyrolles, Paris, 3rd ed., 2007.

[OLI 90] OLIVER R.M., SMITH J.Q., Influence Diagrams, Belief Nets and Decision Analysis, John Wiley, New York, 1990.

[PAC 07] PACAUX-LEMOINE M.-P., VANDERHAEGEN F., 'BCD model for human state identification', Proceedings of the 10th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems, Seoul, Korea, September 2007.

[POL 01] POLET P., VANDERHAEGEN F., MILLOT P., WIERINGA P., 'Barriers and risk analysis', Proceedings of the IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design and Evaluation of Human-Machine Systems, Kassel, Germany, September 2001.

[POL 02] POLET P., Modélisation des franchissements de barrières pour l'analyse des risques des systèmes homme-machine, Doctoral Thesis, University of Valenciennes and Hainaut-Cambrésis, Valenciennes, 2002.

[POL 06] POLET P., CHAALI-DJELASSI A., VANDERHAEGEN F., 'Comparaison de deux méthodes de prédiction des erreurs humaines en conduite automobile', Actes de ErgoIA 2006 'L'humain comme facteur de performance des systèmes complexes', France, pp. 193–200, 2006.

[POL 09] POLET P., VANDERHAEGEN F., MILLOT P., WIERINGA P.A., 'Human behaviour analysis of barrier deviation using the benefit-cost-deficit model', Journal of Advances in Human-Computer Interaction, pp. 10–19, 2009.

[REA 90] REASON J., Human Error, Cambridge University Press, New York, 1990.

[ROB 04] ROBACHE F., MORVAN H., POLET P., PACAUX-LEMOINE M.-P., VANDERHAEGEN F., 'The benefit-cost-deficit (BCD) model for human analysis and control', Proceedings of the 9th IFAC/IFORS/IFIP/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems, Atlanta, GA, September 2004.

[ROB 06] ROBACHE F., MORVAN H., POLET P., PACAUX-LEMOINE M.-P., VANDERHAEGEN F., 'The BCD model for biomechanical application', Proceedings of the 25th European Annual Conference on Human Decision Making and Manual Control (EAM'06), ValenSciences, PUV, Valenciennes, September 2006.

[SAV 54] SAVAGE L.J., The Foundations of Statistics, John Wiley & Sons, New York, 1954.

[SHA 92] SHACHTER R., PEOT M., 'Decision making using probabilistic inference methods', Proceedings of the 8th Conference on Uncertainty in Artificial Intelligence, pp. 276–283, 1992.

[TRU 08] TRUCCO P., CAGNO E., RUGGERI F., GRANDE O., 'A Bayesian belief network modelling of organisational factors in risk analysis: a case study in maritime transportation', Reliability Engineering & System Safety, vol. 93, no. 6, pp. 845–856, 2008.

[VAN 03] VANDERHAEGEN F., Analyse et contrôle de l'erreur humaine, Hermès, Paris, France, 2003.

[VAN 09] VANDERHAEGEN F., ZIEBA S., POLET P., 'A reinforced iterative formalism to learn from human errors and uncertainty', Journal of Engineering Applications of Artificial Intelligence, vol. 22, nos. 4–5, pp. 654–659, 2009.

[VAN 10] VANDERHAEGEN F., 'Human-error-based design of barriers and analysis of their uses', Cognition, Technology and Work (special issue in honor of E. Hollnagel), vol. 12, no. 2, pp. 133–142, 2010.

[VAN 11] VANDERHAEGEN F., ZIEBA S., ENJALBERT S., POLET P., 'A benefit/cost/deficit (BCD) model for learning from human errors', Reliability Engineering and System Safety, vol. 96, pp. 757–766, 2011.

[VIL 92] VILLEMEUR A., Reliability, Availability, Maintainability and Safety Assessment, Volume 1, Methods and Techniques, Wiley, 4 February 1992.

[ZHA 04] ZHANG Z., POLET P., VANDERHAEGEN F., MILLOT P., 'Artificial neural network for violation analysis', Reliability Engineering and System Safety, vol. 84, pp. 3–18, 2004.

1 Chapter written by Karima SEDKI, Philippe POLET and Frédéric VANDERHAEGEN.

1 Human–machine system for integrated risk analysis.

2 Research Center for Automatic Control of Nancy.

3 Laboratory of Industrial and Human Automation, Mechanics and Computer Science.
