Chapter Four

Human Factors in the Chemical Process Industries

Kathryn Mearns1    Human Factors Consultant
1 Corresponding author: email address: [email protected]

Abstract

Human Factors is a multifaceted discipline incorporating ergonomics, engineering psychology, human–machine interaction, the working environment, and human and organizational factors. This chapter provides definitions of the terms associated with human factors, including human error and performance influencing factors. It then discusses the techniques used to identify the potential for human error in the workplace before discussing some of the performance influencing factors under the headings of individual factors, job factors, and organizational factors. Factors discussed include competence and skills, personality, occupational stress, design of control rooms, and procedures. Above all, the importance of understanding the safety culture or safety climate of an organization or site is stressed, as is the role of management at all levels, i.e., senior managers, middle managers, and supervisors. In conclusion, the chapter advocates integrating Human Factors into the safety management system (SMS). This will ensure the proper design of hardware, software, processes, and procedures to facilitate good practice in operation and maintenance. Training, competence, and management of change should also be considered an integral part of the SMS. Ultimately, although everyone who works in the process industries is responsible for safety, senior management is accountable for it. They are the individuals who would be found culpable in a court of law in the event of a major incident, as the outcomes of public inquiries such as Flixborough, Piper Alpha, Texas City, and Deepwater Horizon testify.

Keywords

Human error; Human factors; Job factors; Individual factors; Organizational factors

1 Introduction

The chemical process industries encompass a number of different hazardous substances and processes, ranging from the manufacture of pharmaceuticals and industrial chemicals through to processing hydrocarbons into petrochemicals. Nevertheless, some common principles are necessary to prevent the major accident hazard (MAH) risks that are inherent in these industries from being realized. These principles have been covered elsewhere in this volume, and this chapter focuses on how the human element can contribute to major accidents and how contributory “performance influencing factors” (PIFs) can be managed and mitigated.

Of course, the chemical process industries are not just susceptible to MAH risks, i.e., loss of containment, fires, etc. The likelihood of an MAH risk being realized is fortunately very small, and the vast majority of incidents in the chemical process industries tend to be occupational injuries arising from slips, trips, and falls. What is relevant about human factors is that the same personal and system failings can lead to occupational injuries as well as to major accident harm. While it is unacceptable for people to suffer any accident or injury at work, the main focus for most human factors work in the chemical process industries is to prevent major accidents. These accidents have the potential to affect not only the people working directly at the plant but also the local populations residing close by and the physical environment.

Major accidents such as Seveso, Flixborough, Bhopal, and Texas City serve to highlight the costs, both in human lives and in financial losses. For example, 15 people died and 180 were injured at Texas City, and the financial losses were in the order of $1.5 billion. The immediate and underlying causes of this accident are well documented in the US Chemical Safety Board Report (2007). Of the four "Key Issues" identified as underlying causes of this accident, it is worth noting that safety culture and human factors are two, the other two being process safety metrics and regulatory oversight. The report notes that simply focusing on the actions of the frontline operators at the plant and their immediate supervisors misses the point. Tackling the underlying causes of the accident, such as a poor safety culture, an overfocus on occupational safety indicators rather than process safety indicators, a cost-cutting regime, and a lack of incident reporting, will have a greater impact on preventing future major accidents. This recommendation is supported by the incident investigations of other major accidents over the decades in a number of safety critical industries.

This chapter is organized in the following way. First, there is an outline of the human factors definitions and terminology, to ensure that readers have a consistent understanding of the terms used throughout the chapter. Second, there is a section on human error and violations (also known as noncompliances/nonconformances), which are often the final triggering actions that can lead to an adverse event. Third, there is a description of the factors that can influence human performance (so-called PIFs), which need to be managed in order to keep the risk of human error as low as reasonably practicable. Finally, techniques used to measure and manage human error and PIFs are discussed throughout the chapter.

2 Human Factors Definitions and Terminology

Human Factors (HF), or Human and Organizational Factors (HOF) as it is sometimes called, is a wide-ranging discipline incorporating elements of psychology, physiology, ergonomics, sociology, engineering, and management science. The discipline is often labeled Ergonomics or Human Factors Engineering, but in reality its scope is much broader. There is an increasing emphasis on the knowledge, skills, and techniques that characterize the social sciences, such as sociology and management studies, i.e., qualitative techniques such as interviews, focus groups, and workshops, alongside the quantitative techniques, e.g., Human Reliability Analysis (HRA), Human Error Quantification, and questionnaire survey design and deployment, that have traditionally been associated with human error and human factors measurement and management.

For the purposes of this chapter, the terminology used in research and assessment of human factors is clarified below since many of these terms are used interchangeably and inconsistently in the literature:

1. “Ergonomics” literally means “laws of work” and is the term mainly used in Europe. The term Human Factors originated in the United States but is now used more widely. Ergonomics tends to be associated more with physical workplace assessment, which can also be referred to as Human Factors Engineering.

2. Cognitive Ergonomics or Engineering Psychology emphasizes the study of cognitive or mental aspects of work, especially where there is a high level of human–machine interaction, automation, decision making, and mental workload.

3. Human–Machine Interaction or Human–Computer Interaction is the applied study of how people interact with technology.

4. Working Environment emphasizes the environmental and task factors that influence human performance.

5. Human and Organizational Factors (HOF) emphasizes the organizational aspects that influence human performance such as leadership style, management systems, safety culture and climate, training and competency arrangements, incident reporting systems, behavioral safety, and human resource practices. As such, HOF mainly concerns itself with PIFs, which will be covered later in the chapter.

Members of the human factors community will often specialize in more than one of these subdisciplines and will be familiar with a range of different techniques to measure and manage both human error and PIFs.

The UK Health and Safety Executive (i.e., UK Regulator) considers Human Factors to encompass “environmental, organizational and job factors, and human and individual factors, which influence behavior at work in a way which can affect health and safety” (HSE, 1999). Using the above definition as a basis, the UK HSE has grouped the following human factors issues under each theme (see Table 1).

Table 1

Human Factors Issues According to HSG48 (HSE, 1999)

Individual          Job                       Organization
Competence          Task                      Culture
Skills              Workload                  Leadership
Personality         Environment               Work patterns
Attributes          Displays and controls     Resources
Risk perception     Procedures                Communications

Table 1 provides a useful framework for considering the topic of human factors, although it is often difficult (and indeed inadvisable) to consider these factors in isolation, since they are often interrelated and interact with each other. For example, "competence" could be considered an individual responsibility, in the sense that individuals should ensure that they are trained and competent for the tasks in which they are involved. It could also be considered an organizational responsibility, in the sense that the organization should ensure that arrangements are in place for their staff to be properly trained and competent for their roles and responsibilities.

It is not possible to discuss all the issues in Table 1 in great detail; therefore, this chapter will only focus on a few issues under each heading. For individual factors, the focus will be on skills and personality, thereby covering competence and attributes. For job factors the focus will be on displays and controls and procedures, thereby including task and environment. For organizational factors the focus will be on culture and leadership, thereby encompassing resources and communication, both of which are key aspects of leadership.

One of the first points to be addressed is the definition of human error (human failure), which can be caused by many of the PIFs outlined in Table 1.

2.1 Human Error

Professor James Reason defines Human Error as “a generic term to encompass all the occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcome and when these failures cannot be attributed to the intervention of some chance agency” (Reason, 1991, p. 9).

Reason further distinguishes between slips, lapses, and mistakes. Slips are associated with faulty actions, where actions do not proceed as intended, e.g., misreading a display. Lapses are failures of memory, e.g., forgetting to press a switch. Slips and lapses tend to occur during routine tasks in familiar surroundings where the operator may be on “automatic pilot” or attention is captured by something other than the task in hand, e.g., a work colleague distracting the operator with a question. They are categorized as “skill-based” errors and are relatively easy to recover from because the operator will receive direct feedback that their actions have not led to the anticipated outcome.

Mistakes occur when the intended plan of action is wrong in the first place. In other words, the actions may proceed as planned but fail to achieve their intended outcome. There are two types of mistake: "rule-based mistakes" and "knowledge-based mistakes." Rule-based mistakes occur when an incorrect diagnostic rule is applied. For example, an experienced operator on a batch reactor may have learned diagnostic rules that are inappropriate for continuous process operations. If an attempt is made to apply these rules to work out the cause of a continuous process upset, a misdiagnosis can occur, leading to an inappropriate action. There is also a tendency for people to apply strong rules that have proved successful in the past, even when they are not appropriate to the situation.

Knowledge-based mistakes occur when the information-processing capabilities of the operator are being tested by an unfamiliar situation that has to be worked out from first principles and there are no apparent rules or procedures to deal with the situation. Quite often, only information that is readily available will be used to evaluate the situation or people will depend on a “gut feel” that their course of action is correct, perhaps based on similar but unrelated incidents in the past. Sometimes “attention capture” occurs where the operators become focused on one small part of an overall problem, e.g., at Three-Mile Island. In other situations, operators might switch their attention between one task and another, thereby not really solving the problem.

Mistakes are difficult to detect and are more subtle, complex, and dangerous than slips. Detection may depend on someone else intervening or unwanted consequences becoming apparent.

The final type of "error" is a violation or nonconformance/noncompliance. These are considered "intentional" because the operator deliberately carries out actions that contravene the organization's policies, rules, or safe operating procedures. However, violations tend to be well intentioned, the objective usually being to complete a task or to simplify it. Fortunately, violations are very rarely conducted in order to sabotage a process or a plant.

It is important to note that violations can occur due to both internal and external pressures on the operator (i.e., pressure to get a task done on time, which can come from the individual or it can come from perceived pressures from supervisors and managers). Violations can be routine, situational, optimizing, or exceptional.

Routine violations occur when violating the rules becomes the norm, i.e., what is normally done in the workplace, learned by others and therefore becoming embedded as the way of doing things. They are usually shortcuts taken to get the job done more quickly, more efficiently, or more easily. Unless these types of violation are monitored and controlled, the organization can develop a culture that tolerates violations. Routine violations can be counteracted through good supervision, proper training (with explanations of why certain procedures are in place), good procedures and work practices, and, as a final measure, behavioral safety programs, which coach to reinforce correct behavior and to challenge and change incorrect behavior.

Situational violations tend to occur when there is a gap between what the rules or procedures say should be done, and what is actually available in order to get the job done. For example, a lack of trained and competent staff to conduct a task or lack of procedural clarity can lead to a situational violation. This can often occur when management are unaware of what resources are required, when procedures are out of date, or when there has been a cost-cutting program and staff have been made redundant, thus leading to reduced manning levels. Obviously, the same measures that are applied to manage routine violations can be used to manage situational violations.

Optimizing violations occur when individuals carry out an activity for personal gain or simply for "kicks," e.g., seeing how far they can go by testing the system to its limits. However, organizations often provide incentives such as bonuses for meeting production targets, which can encourage "organizational" optimizing violations. If brought out into the open through incident/near miss reporting programs and good communication up and down the organizational hierarchy, these types of violation can help to identify measures that can be taken to improve both production and safety within the organization.

Exceptional violations occur when there is an unusual or unanticipated situation where no rules or procedures apply or where the rules/procedures cannot be applied. Perhaps the most poignant example of an exceptional violation can be seen during the Piper Alpha disaster, where personnel had been told that jumping off the platform into the sea was not survivable. In reality, many of those who jumped into the sea that night did survive, unlike those who stayed on the platform and, as their procedures and training dictated, made their way to the muster point in the accommodation block, where they perished.

This example serves to remind us that human beings do not always follow rules and procedures blindly but are capable of interpreting and adapting to situations and solving problems in situ. This inherent behavioral flexibility and adaptability is what keeps us safe in a complex and constantly changing environment, an issue that we will return to later in the chapter.

Human error can be seen as a consequence or outcome of our performance limitations, and we regularly make errors on an everyday basis. Even highly trained and competent individuals working in control rooms and on maintenance tasks within the chemical process industries are error-prone, but fortunately, most errors are captured before they develop into something more serious. There are a number of challenges facing the human factors specialist. One challenge is to design plant, equipment, and systems that make the chance of human error very low or As Low As Reasonably Practicable (ALARP), i.e., at a point where the costs are not so prohibitive that the benefits are seriously undermined. Another challenge is to identify where human error is most likely to occur, what type of error might occur, and how likely it is to occur. A third challenge is to measure human error and identify what factors make it more likely to occur (the so-called PIFs). Finally, once errors have occurred, there is the question of how they are identified and recovered from.

The remainder of this chapter covers how these many challenges can be identified and overcome. Due to the diverse and extensive nature of the human factors subject matter, not all of the factors identified by the HSE or other international bodies will be reported here. Those areas in which the author has relevant knowledge and expertise will be the main focus of attention. As a result, most of the discussion will be centered on human performance issues and will cover individual and organizational factors, since these are the areas in which the author has conducted most of her research. Ergonomics and Human Factors Engineering, particularly regarding workplace design, human–machine interaction, and characteristics of the working environment, are not areas that the author is familiar with and therefore will only be discussed in passing. Nevertheless, it is worth noting that “Designing for Humans” (Noyes, 2001) is an important exercise and can potentially prevent the development of many of the human performance issues that will be discussed in this chapter.

2.2 Measuring and Managing Human Error

In a paper prepared for presentation at the Institution of Chemical Engineers (IChemE) Hazards XX symposium in April 2008, Visscher of the US Chemical Safety Board presented summaries of investigations of some of the 50 chemical-related incidents in the United States since 1998. In his conclusions, he notes that many of the incidents occurred in facilities where chemicals are stored and used for other purposes, rather than at chemical processing companies per se. He also observes that controls and safeguards that rely on human judgment and reliability are revealed as a particular area of vulnerability and that management should focus on these issues in their operations. Furthermore, Visscher reports that the major accidents that have occurred at the larger companies have highlighted the important role of corporate leadership and oversight in assuring process safety integrity. Corporate leadership and oversight, and the important role they play in maintaining safety, are explored later in this chapter.

The chemical process industries use a number of techniques for measuring and managing human error/human failure. The Center for Chemical Process Safety (CCPS) (1994) has produced a set of guidelines for preventing human error in process safety, to which the interested reader is referred.

This section will only deal with a few of these techniques, but it is also important to remember that it is difficult to estimate when and where human failure is likely to occur, and the most effective way to prevent errors is to focus on managing the PIFs in general. Thorough and detailed incident investigation, with a strong focus on human and organizational factors, is important to identify which PIFs should be focused on in order to prevent further incidents. In the author's experience, the following PIFs are the most commonly identified contributory factors in major accidents: poor design, poor procedures, lack of supervision, fatigue, poor safety climate/culture and lack of safety leadership, and underinvestment by senior management in safety improvements.

2.3 Human Reliability Analysis

Human factors specialists attempt to measure the likelihood of human error in predetermined situations through HRA and Human Error Probability (HEP) estimates. HRA techniques are used to support the minimization of risks associated with human failure. They are both quantitative (e.g., HEP) and qualitative (e.g., safety critical task analysis (SCTA)) in nature; however, the application of quantitative techniques can be difficult because HEPs in particular are often used without sufficient justification. In particular, new processes and new technologies will not have sufficient data available to generate HEPs. SCTA (see Energy Institute, 2011) considers the impact that PIFs can have on the likelihood of error, which will be affected by job and organizational factors such as the design of plant and equipment, the quality of procedures, and the time available to get the job done. Using HEPs without task context can therefore lead to inaccuracies in analysis.

Henderson and Embrey (2012) produced guidance for the Energy Institute on Quantified Human Reliability Analysis (QHRA) (see Energy Institute, 2012), in order to reduce instances of poorly developed or executed analyses. These authors recommend an eight-stage Generic HRA process as follows:

1. Preparation and problem definition

2. Task analysis

3. Failure identification

4. Modeling

5. Quantification

6. Impact assessment

7. Failure reduction

8. Review

Bow-tie diagrams have become a popular way of illustrating how initiating and response failures can occur. For example, Henderson and Embrey (2012) use the following figure to show how different human failures can affect the initiation, mitigation, and escalation of a hypothetical event (see Fig. 1).

Fig. 1 Examples of the potential impact of human failures on an event sequence. From Henderson, J., & Embrey, D. (2012). Quantifying human reliability in risk assessments. Petroleum Review.

From a practical point of view, a number of factors can undermine the validity of an HRA. As a starting point, the analyst will need a thorough understanding of the task and the environment in which it is conducted. Therefore, the input of skilled and experienced operators will be required: a walk-through of the tasks and subtasks involved in the activity at the location and/or a detailed talk-through of the tasks and subtasks in a task analysis workshop is necessary. It is important to understand which PIFs might be exerting an influence in the actual working situation. Also, if any HEPs have been imported (usually from a general database, if such a database exists), the rationale for including these HEPs must be clearly articulated. It is normal practice to use a set of guidewords to identify the potential failures.

There are a number of guides to HRA (see Health and Safety Executive, 2009; Kirwan, 1994). The HSE (2009) report RR679 provides a review of human reliability assessment methods. Out of a total of 72 human reliability tools identified, the report authors considered 35 to be potentially relevant to hazardous industries such as the chemical process industries. This list was then reduced to 17 tools, most of which had only been applied in the nuclear industry. Only five of these tools were considered to have “Generic” capability, although some of the nuclear tools were considered to have wider application.

The lack of space precludes detailed discussion of all 17 HRA tools covered in RR679, and the interested reader is advised to access the report, which is available on the Health and Safety Executive website, for further details. Two of the tools are discussed here on the basis that they are generic or have the potential to be applied more widely than in the nuclear sector: the Human Error Assessment and Reduction Technique (HEART) and the Technique for Human Error Rate Prediction (THERP). In addition, HEART and THERP are two of the few HRA methods that have been empirically validated (see Kirwan, 1996; Kirwan, Kennedy, Taylor-Adams, & Lambert, 1997).

2.3.1 Human Error Assessment and Reduction Technique

Williams (1985, cited in HSE, 2009) is attributed as being the first to refer to HEART in a series of conference papers. According to the review of HRA methods in RR679, HEART has been applied across a number of high-hazard industries where human reliability is critical, including the chemical process industry. It is designed to be relatively quick to apply and is easily understood by both human factors specialists and engineers. HEART describes nine Generic Task Types (GTTs), each with an associated nominal HEP, and 38 Error Producing Conditions (EPCs) that affect task reliability, each with a maximum amount by which the nominal HEP can be multiplied. The key stages of HEART are the following:

 classify the task for analysis into one of the nine GTTs;

 assign the nominal HEP to the task;

 identify which EPCs may affect task reliability;

 consider the proportion of effect for each EPC;

 calculate the task HEP.

There are a number of premises that have to be taken into consideration when the technique is applied: (1) human reliability will be dependent upon the task to be performed; (2) this level of human reliability will tend to be achieved with a given likelihood within probabilistic limits in perfect conditions; (3) since perfect conditions rarely exist, human reliability will degrade as a function of the extent to which EPCs apply. It should be noted that the total probability of failure should never be more than 1.00, so if the multiplication of factors goes above 1.00, the probability of failure can only ever be assumed to be 1.00.
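The stages and premises above can be sketched numerically. In HEART, the assessed effect of each EPC is obtained by scaling its maximum multiplier by the analyst's judged "proportion of effect," and the nominal GTT HEP is multiplied by each assessed effect, capped at 1.00 as noted above. The nominal HEP and EPC values below are hypothetical illustrations, not figures from Williams (1985):

```python
def assessed_epc_effect(max_multiplier: float, proportion: float) -> float:
    """Scale an EPC's maximum multiplier by the assessed proportion
    of effect (0.0 = no effect, 1.0 = full effect)."""
    return (max_multiplier - 1.0) * proportion + 1.0


def heart_task_hep(nominal_hep: float, epcs: list) -> float:
    """Multiply the nominal HEP by each assessed EPC effect,
    capping the result at 1.0 (a probability cannot exceed 1)."""
    hep = nominal_hep
    for max_multiplier, proportion in epcs:
        hep *= assessed_epc_effect(max_multiplier, proportion)
    return min(hep, 1.0)


# Hypothetical example: a routine task with a nominal HEP of 0.003,
# affected by time pressure (max x11, assessed at 40% effect) and
# distraction (max x6, assessed at 20% effect):
#   effects are (10 * 0.4 + 1) = 5.0 and (5 * 0.2 + 1) = 2.0,
#   so the task HEP is 0.003 * 5.0 * 2.0 = 0.03.
task_hep = heart_task_hep(0.003, [(11, 0.4), (6, 0.2)])
```

The cap matters in practice: with several strong EPCs at full effect, the raw product can exceed 1.00, in which case the failure probability is simply taken as 1.00.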

2.3.2 Technique for Human Error Rate Prediction

Swain and Guttman (1983) developed THERP for the US Nuclear Regulatory Commission (NRC). According to Kirwan (1994), THERP is a total methodology for assessing human reliability and, like HEART, it has also been validated by Kirwan et al. (1997). The THERP handbook prepared by Swain and Guttman (1983) for the NRC presents methods, models, and estimated HEPs to allow analysts to make either quantitative or qualitative assessments of human errors in nuclear power plants. It includes task analyses, error identification, and quantification of HEPs. Although THERP was developed for and has been used extensively in the nuclear industry, it has also been applied to the offshore and medical sectors and would no doubt also have applications in the chemical process industries. RR679 (HSE, 2009) outlines the key steps for applying THERP as:

 Decomposition of tasks into elements

 Assignment of nominal HEPs to each element

 Determination of effects of PSF on each element

 Calculation of the effects of dependence between tasks

 Modeling in an HRA event tree

 Quantification of total task HEP

Estimating the overall probability of failure involves summing the probabilities of all failure paths in the event tree. When all the HEPs are 0.01 or smaller, summing only the primary failure paths and ignoring the success limbs gives a good approximation of the total failure probability.
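This summation can be sketched as follows, under simplified assumptions: a series of independent task elements, each with its own HEP, and no modeling of dependence or recovery (which full THERP event trees do include):

```python
def total_failure_probability(heps: list) -> float:
    """Exact probability that at least one element in the sequence fails:
    1 minus the product of the element success probabilities."""
    p_success = 1.0
    for hep in heps:
        p_success *= (1.0 - hep)
    return 1.0 - p_success


def approximate_failure_probability(heps: list) -> float:
    """Approximation described in the text: sum the primary failure
    paths and ignore the success limbs, reasonable when every HEP
    is about 0.01 or smaller."""
    return sum(heps)


# Hypothetical two-element task with HEPs of 0.005 and 0.008:
#   exact:       1 - (0.995 * 0.992) = 0.01296
#   approximate: 0.005 + 0.008       = 0.013
exact = total_failure_probability([0.005, 0.008])
approx = approximate_failure_probability([0.005, 0.008])
```

The two figures agree closely because the cross-term (both elements failing) is negligible at these magnitudes; with larger HEPs the simple sum overstates the true failure probability.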

In conclusion, the human factors community has developed a range of different techniques to assess human reliability across a range of high hazard industries. The HSE report RR679 presents a review of 17 of these methods out of the total 72 identified. Most of the HRA tools have been developed for the nuclear industry; however, five of these tools were considered to have "Generic" application, having been used in other industries, including the chemical process industry.

It is worth noting that there have been three generations of HRA tools, and the selection of an appropriate tool may depend upon the maturity of the site where it is being applied. For sites that are attempting to quantify the risk of human error for the first time, first-generation tools may be the most useful. They may not give insight into dependency or errors of commission, but they will give a fundamental insight into the issue of human error. HEART and THERP are examples of these first-generation tools. Second-generation tools are more appropriate for more "mature" sites with a long tradition of applying the first-generation tools that want to understand context and errors of commission when predicting human error. CREAM, ATHEANA, and MERMOS are examples of second-generation tools, but they do not seem to be widely applied and they have yet to be empirically validated. Third-generation tools are now being developed using first-generation tools such as HEART as a basis. It is the current author's belief that if an organization does not have a full understanding of the context under which tasks are executed, there is little chance of accurately assessing human reliability; therefore, the impact of PIFs should be the foremost consideration in the application of any tool.

2.4 Safety Critical Task Analysis

SCTA describes a process whereby the impact of potential human error on MAHs can be assessed. SCTA is an extension of Task Analysis, which is the study of the actions and mental processes an employee is expected to carry out in order to achieve a goal. Task Analysis can be used for activities such as assessing staffing levels and improving training programs; however, it is not discussed in any detail in this chapter. For the interested reader, there are some excellent resources available, e.g., Kirwan and Ainsworth (1992) and Shepherd (2001). The purpose of SCTA is to identify where human failure can contribute to MAHs, covering initiating events, prevention and detection, control and mitigation, maintenance tasks, and emergency response. The process involves identifying which tasks on a site are safety critical in relation to MAHs, understanding whether human error or violations might contribute to initiating an adverse event, and understanding what preventative measures or layers of protection could be put in place to reduce the likelihood or mitigate the consequences of human failure.

A number of publications exist to support the implementation of SCTA. The Energy Institute (2011) has developed a clear set of guidance on SCTA and the UK Health and Safety Executive has produced a guidance paper for its inspectors (identifying human failures, HSE Core Topic 3), describing a seven-step approach for SCTA. This seven-step approach consists of the following:

1. Identify the main hazards—e.g., from HAZIDs, Safety Reports, or Risk Assessments.

2. Identify safety critical tasks associated with those main hazards and prioritize those tasks where there are many MAHs. Procedures and discussions with staff are the main techniques recommended.

3. Understand the safety critical tasks, i.e., who does what, when and in what sequence? Again this can be derived from procedures, checklists, interviews with staff (walkthroughs/talkthroughs), and observations of staff conducting the tasks.

4. Represent the safety critical tasks, i.e., through breakdown of the tasks in tables or diagrams in sufficient detail for further analysis.

5. List all the potential human failures and their consequences, through representing the safety critical tasks from step 4. Also, list the potential PIFs that could influence human performance for each task and consider what safety measures are already in place. The different types of human failure/error and PIFs used in SCTA are listed in Tables 2 and 3.

Table 2

Human Error Guidewords for Use in SCTA (Energy Institute, 2011)

Action failures

Operation omitted
Operation incomplete
Operation mistimed
Operation in wrong direction
Operation too long/short
Operation too little/much
Operation too fast/slow
Operation too early/late
Operation in wrong order
Right operation on wrong object
Wrong operation on right object
Misalignment
Misplacement

Checking failures

Check omitted
Check incomplete
Right check on wrong object
Wrong check on right object
Wrong check on wrong object
Check mistimed

Retrieval failures

Information not obtained
Wrong information obtained
Information retrieval incomplete
Information incorrectly interpreted

Selection failures

Selection omitted
Wrong selection made

Communication failures

Information not communicated
Wrong information communicated
Information communication incomplete
Information communication unclear

Planning failures

Plan omitted
Plan incorrect

Table 3

PIFs for SCTA (Energy Institute, 2011)

Individual factors

Physical capability
Fatigue
Stress/morale
Work overload/underload
Competence
Motivation

Job factors

Clarity of signs, signals, instructions
System/equipment interface (labeling, alarms)
Difficulty/complexity of task
Routine or unusual task
Divided attention
Inadequate procedures
Task preparation (PTW, risk assessment, checklists)
Time available/required
Tools appropriate for the task
Working environment (noise, heat, space, lighting)

Organizational factors

Work pressure
Supervision/leadership
Communication
Staffing levels
Peer pressure
Clarity of roles and responsibilities
Consequences of failure to follow rules/procedures
Effectiveness of learning from incidents
Safety culture
Change management

Adapted from Health and Safety Executive (1999). Reducing error and influencing behaviour (HSG48). Suffolk: HSE Books.

6. Identify any additional safety measures that could be implemented to further mitigate PIFs and the risk of human failure.

7. Review the effectiveness of the process; this review contributes to a wider understanding of SCTA and to improvements in the technique.

A common problem is that all tasks are considered to be "safety critical." Usually, it is the operational tasks that are the focus of the SCTA, e.g., chemical offloading operations, control room operations, or blending chemicals; however, checking tasks, emergency response, and maintenance tasks might also be included.

The UK Health and Safety Executive has made SCTA a requirement for acceptance of its Control of Major Accident Hazards (COMAH) Safety Reports. Table 2 lists the human error guidewords suggested by the HSE.
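Although SCTA is essentially a paper-based analysis technique, the guideword screening in step 5 lends itself to simple tooling. The sketch below is a hypothetical illustration, not part of the HSE or Energy Institute guidance: it encodes a small subset of the Table 2 guidewords as data and pairs each step of a task with the guidewords the analyst should consider. All function and variable names, and the example task, are the author's illustrative assumptions.

```python
# Hypothetical sketch of guideword-based screening for a safety critical task.
# The guideword lists below are a subset of Table 2 (Energy Institute, 2011);
# the function and variable names are illustrative, not a standard API.

GUIDEWORDS = {
    "action": ["operation omitted", "operation mistimed", "operation in wrong order",
               "right operation on wrong object"],
    "checking": ["check omitted", "check incomplete", "check mistimed"],
    "communication": ["information not communicated", "wrong information communicated"],
}

def screen_task(steps, categories):
    """Pair every task step with each applicable guideword, prompting the
    analyst to judge whether that failure mode is credible and what its
    consequences and safeguards would be (SCTA steps 5 and 6)."""
    prompts = []
    for step in steps:
        for category in categories:
            for guideword in GUIDEWORDS[category]:
                prompts.append((step, category, guideword))
    return prompts

# Example: screening a simplified (invented) chemical offloading task.
task = ["connect transfer hose", "verify coupling seal", "open delivery valve"]
prompts = screen_task(task, ["action", "checking"])
print(len(prompts))  # 3 steps x (4 action + 3 checking) guidewords = 21
```

The systematic cross-product of steps and guidewords is the point: it forces the analyst to consider every failure mode for every step rather than only the failures that come readily to mind.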

3 Performance Influencing Factors

The management of human failure is dependent upon understanding and responding appropriately to PIFs. The list of factors identified by the HSE (1999) shown in Table 1 outlines some PIFs; however, for the purposes of SCTA, more comprehensive lists should be used (see Table 3).

Having outlined techniques used to identify different types of human error, the remainder of the chapter discusses some of the PIFs identified under the HSE, HSG48 framework shown in Table 1.

4 Individual Factors

The HSE (1999) lists the individual factors that can affect performance, such as competence, skills, personality, and various personal attributes such as attitudes to risk and safety. Competence can be defined as "A cluster of related abilities, commitments, knowledge, and skills that enable a person (or an organization) to act effectively in a job or situation" (Business Dictionary, accessed October 2016). The same source defines "skills" as "An ability or capacity acquired through deliberate, systematic and sustained effort to carry out complex activities or job functions involving ideas (cognitive skills), things (technical skills), and/or people (interpersonal skills)." Personality refers to a "relatively stable, consistent and distinctive set of mental and emotional characteristics a person exhibits when alone, or when interacting with people and his or her external environment." Within the context of the chemical process industry, attitudes to risk and safety would refer to a tendency to respond positively or negatively toward hazards, their likelihood, their consequences, and the measures in place to mitigate the realization of the risks inherent in those hazards (author's own definition). Hence, someone could hold a positive attitude toward risk and a negative attitude to safety or a negative attitude toward risk and a positive attitude toward safety. This section will now expand on each of these individual factors.

4.1 Competence and Skills

Competence and skills are clearly of prime importance; however, there can be some debate about who is responsible for developing skills and competencies and keeping them up to date. Depending on the nature of the job/tasks and the skills and competencies associated with it, initial responsibility may lie with the individual or with the schools, colleges, and universities that are responsible for ensuring that people receive the necessary education. No matter what the job or tasks entail, it is safe to assume that basic numeracy and literacy skills will be required. It is up to the organization to provide an accurate job description and outline the relevant person characteristics required to carry out the job so that the right people can be selected from the outset. In the author's own experience, these job descriptions and person characteristics are sometimes inadequate, and the suspicion is that the Human Resources Department has been left to write them without any input from skilled practitioners, with the result that there is a lack of understanding of the true nature of the job and the type of person required for it. In order to recruit and retain the right people for the job roles and tasks in the organization, it is crucial that the right people are involved in writing the job description and person characteristics.

If one assumes that the right person is selected for the right job, then the organization may need to deliver some “on-the-job” training so that the new recruit can learn the tasks that he or she will be required to carry out in context. Again, in the author's experience, a school-leaver or graduate can very rarely come straight into a job and conduct their activities with the requisite level of competency; development “in role” under the tutelage of an experienced supervisor or mentor will be critical.

Once the basic skills and knowledge are acquired, there is a need for a period of on-the-job supervision to determine whether the individual is competent to carry out the assigned tasks. However, competencies should be kept up to date, for example when there are changes to the job or when there are new developments in systems, technology, legislation, or equipment. Refresher training may also be required for some tasks. Employees in the chemical industry will be trained and competent in the technical skills they require for their job, e.g., control room operations, chemical engineering, or maintenance; however, there is also a set of nontechnical skills that is widely used in other high-hazard, high-reliability industries but does not appear to be prevalent in the chemical process industries.

4.2 Nontechnical Skills

Nontechnical skills training began in the aviation industry as far back as the 1970s, after the industry identified that many aviation accidents did not occur due to technical problems with the aircraft or the pilots’ lack of technical flying ability. Instead, flight data recorders and cockpit voice recorders identified nontechnical skills such as poor situation awareness, communication, teamwork, decision making, leadership, fatigue, and stress, as major contributors to aviation accidents. The industry therefore started to develop nontechnical skills training as part of Crew Resource Management (CRM), which has now become mandatory for all pilots and cabin crew. CRM refers to the flight crew's use of all necessary resources (systems, technology, equipment, and human) to ensure safe and efficient operation of the aircraft. Nontechnical skills are a critical part of CRM, particularly in the identification and recovery from errors (threat and error management). Nontechnical skills are generally divided into two subgroups: (1) cognitive skills (decision making, situational awareness) and (2) social skills (leadership, teamwork, and communication).

The use of simulators and Line-Oriented Flight Training (LOFT), in which the crew's nontechnical skills (situation awareness, team working, decision making) are tested in abnormal situations that have not been prebriefed, has facilitated this type of training over the decades. Test scenarios can be developed from various sources, but accident reports with a full emphasis on human factors issues are most often used. CRM and nontechnical skills are examined both in the simulator and in normal flight operations, and pilots have to keep these skills up to date in order to fly commercial aircraft. Pilots and cabin crew are also required to undertake regular refresher training.

Other sectors such as aircraft maintenance (Sian, Robertson, & Watson, 2016), maritime (STCW, 2010), nuclear (INPO, 1993), offshore production platforms (O’Connor & Flin, 2003), hospital operating theatres (Flin & Maran, 2004; Mitchell & Flin, 2008; Yule, Flin, Paterson-Brown, & Maran, 2006), and offshore well operations (Energy Institute, 2014; International Association of Oil and Gas Producers (OGP), 2014) have also adopted the principles of CRM and nontechnical skills training. It is worth noting that the development of guidance for CRM/nontechnical skills training in well operations has arisen directly as a result of the findings of the inquiries into the Deepwater Horizon disaster.

The content of CRM and associated nontechnical skills training will, out of necessity, reflect the industry in which the training is being designed and implemented (see Flin, O’Connor, & Mearns, 2002 for a review). Nonetheless, reference to the research papers and training development around these programs indicates that the subject areas of situation awareness, decision making, team working, leadership, and managing stress and fatigue are key components. Communication is considered to be the cornerstone of CRM and nontechnical skills training.

The author is not aware of CRM and nontechnical skills training in the onshore chemical process industries; however, it is likely that the components outlined earlier, particularly team working, leadership, and communication, are covered in some training programs for the industry. The importance and utility of CRM and nontechnical skills training lie in the fact that it takes incidents and accidents as its starting point and, based on detailed human factors analysis of those incidents, trains the skills that were found to be lacking and to have contributed to the incidents occurring. It is therefore a way of "closing the loop" (Flin et al., 2002) to prevent the likelihood of human failure in the future. Through a focus on CRM and nontechnical skills, personnel can also be trained to recognize and trap errors before they develop into more serious adverse events. The other interesting point about this type of training is that the aviation industry has developed and persisted with it, whereas other industries "chop and change" their training regimes to adopt whatever is fashionable at the time (the offshore oil and gas industry is a case in point). This consistency of approach means that these programs have had time to be integrated into the systems and processes of the industry and to deliver the performance improvements they were designed to deliver.

Proper development of nontechnical skills training also requires sets of "behavioral markers" that can be used to identify whether employees are exhibiting the trained behaviors or not. Obviously, the development of behavioral markers will be industry specific and will require the combined skills of operational personnel (e.g., operators, engineers, maintenance staff) and human factors specialists. Performance can be assessed in training simulators if they are available, or "on line," i.e., while personnel are actually conducting their tasks. Studies into the development of behavioral markers for nontechnical skills training in the aviation sector and hospital operating theaters include Flin and Martin (2001), Flin (2003), Fletcher et al. (2003), and Yule et al. (2008). CRM/nontechnical skills training courses will tend to consist of both theoretical and applied teaching.

In conclusion, nontechnical skills training provides a means by which human performance can be managed and improved. A number of safety critical industries have adopted this type of training, and it could be considered as a form of training for the chemical process industries.

4.3 The Role of Personality

The role of personality in determining safety performance has been well researched starting back in the 1950s with the idea of the “accident-prone” personality. Subsequent research and development has shown that so-called accident proneness may be a misnomer and again we find that a focus on PIFs is a more productive route to follow. Nonetheless, there are clearly individual differences in intelligence, aptitude, dexterity, skills, education, and motivation, and potential employees will tend to be selected for these characteristics rather than their personality, per se.

There is a vast literature on personality and a wide range of personality tests developed by psychologists over many decades. According to the American Psychological Association (APA, 2016), "personality" is defined as "individual differences in characteristic patterns of thinking, feeling and behaving." The assumption is that personality is relatively stable and resistant to change and can be measured along a number of "dimensions," "factors," or "traits." The biological definition of "traits" is characteristics or attributes of an organism as expressed by genes and/or by the environment. When it comes to personality, a combination of genetics and the environment will lead to a particular trait developing in an individual. Many of the personality tests developed by psychologists are designed to identify individuals who are suffering from clinical psychological conditions such as anxiety, depression, dementia, obsessive–compulsive disorders, and schizophrenia. From an occupational or industrial/organizational perspective, a number of personality tests exist for selection and development purposes. These tests are designed not just to measure personality but also to assess aptitudes such as problem solving, social interaction, and situation judgment. One of the best-known occupational personality tests is the Myers–Briggs, which assesses employees' tendencies toward Introversion/Extraversion; Thinking/Feeling; Judging/Perceiving; and Intuition/Sensing. Results from these tests place people into one of a range of 16 different personality types, each of which has its own strengths and weaknesses. Although the Myers–Briggs claims to identify 16 different personality types, most psychologists working in the area of personality testing nowadays recognize five main dimensions (i.e., the so-called Big 5, also known by the acronyms OCEAN or CANOE) with people scoring "high" to "low" on each trait:

 Openness to Experience—This personality trait reflects preferring variety in life, being attentive to inner feelings and having an active imagination, aesthetic sensitivity, and intellectual curiosity.

 Conscientiousness—Individuals who are conscientious tend to have high levels of self-discipline and are good at planning and striving to achieve long-term goals. They are often perceived as being responsible and reliable, but they can also be perfectionists and workaholics.

 Extraversion—Extraverts score highly on scales measuring sociability, assertiveness, energy, and talkativeness. By contrast, those scoring on the other end of the scale are Introverts, who enjoy spending time alone with their own thoughts and prefer solitary work and hobbies.

 Agreeableness—This trait speaks for itself. People who score highly tend to be warm, friendly, and tactful, with a positive opinion of others and are able to get along well with other people.

 Neuroticism—Neurotic individuals display the characteristics of being anxious, worried, moody, frustrated, or afraid. These characteristics are generally not considered to be conducive to good work performance; however, when it comes to risk and safety, neuroticism may have an important role to play.

The research evidence for the role of personality in industrial safety and accident involvement tends to be quite limited, with much of the literature focusing on driver personality and accident involvement rather than how personality traits affect attitudes to risk and safety in the workplace, particularly the chemical process industries. It is important to remember that any personality trait will not be exerting its influence in isolation but will also be subject to influence from the working environment (job factors) and the social environment (organizational factors) that the individual is exposed to.

4.3.1 Personality and Safety in the Workplace

A study conducted by Hansen (1989) investigated the relationship between accidents, biodata (e.g., age, job experience), cognitive factors, and personality in a sample of 362 chemical industry workers in the United States. The only personality trait investigated appears to have been neuroticism, in particular the social maladjustment and distractibility components of neuroses, both of which were found to have a relationship with accident involvement. Thus, individuals who were more socially maladjusted and prone to distractibility were more likely to have experienced a work accident.

Another study by Cellar, Yorke, Nelson, and Carroll (2004) examined the relationships between the Big 5 and workplace accidents in a sample of 202 undergraduate volunteers (134 women and 68 men). Clearly, this is not the best sample of “industrial workers” to focus on and the validity of this study could be challenged since it can be assumed that these students would probably not be working full time and may only have temporary jobs while completing their studies. The results showed that more Agreeable and Conscientious individuals were less likely to have been involved in accidents.

Clarke (2006a) investigated the role of perceptions of safety (known as "safety climate"; see later in this chapter), attitudes to safety (a personal expression of favor or disfavor toward risk and safety), and personality characteristics or traits on accident involvement in a wide variety of industries. The study took a "meta-analytic" approach, in which relevant studies were identified from a literature search and the correlations between the variables of interest, i.e., safety perceptions, attitudes to safety, personality traits, and accident involvement, were analyzed. After an initial screening exercise against a set of criteria, only 19 studies out of a total of 51 originally identified were included in the meta-analysis.
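The core arithmetic of a correlational meta-analysis can be illustrated with a short sketch. The example below shows a bare-bones, sample-size-weighted mean correlation in the style of Hunter and Schmidt; the study figures are invented purely for illustration and are not Clarke's data, and refinements such as corrections for measurement error are omitted.

```python
# Bare-bones meta-analytic pooling: a sample-size-weighted mean correlation.
# The (n, r) pairs below are invented illustrative figures, NOT data from
# Clarke (2006a); corrections for unreliability, range restriction, etc.
# used in full Hunter-Schmidt meta-analysis are omitted for clarity.

def weighted_mean_r(studies):
    """studies: list of (sample_size, correlation) pairs, one per study.
    Larger studies contribute proportionally more to the pooled estimate."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

# Three hypothetical studies of safety climate vs. accident involvement
# (negative r: poorer climate perceptions, more accidents).
studies = [(120, -0.30), (250, -0.20), (80, -0.35)]
pooled = weighted_mean_r(studies)
print(round(pooled, 3))  # (120*-0.30 + 250*-0.20 + 80*-0.35) / 450 = -0.253
```

Weighting by sample size is the design choice that matters: a correlation from a study of 250 workers is a more precise estimate than one from 80 workers, so it should pull the pooled value further toward itself.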

The results showed that negative safety perceptions were correlated most highly with accident involvement, followed by negative attitudes to safety and, last of all, the personality characteristics with relatively low correlations. However, the personality trait "Agreeableness" was a better predictor of accident involvement than either safety attitudes or safety climate. Basically, people who demonstrate lower levels of agreeableness are more likely to be involved in accidents. This could be due to the fact that such individuals are less likely to be socialized into the norms of the organization and may also be less compliant with regulations, rules, procedures, and work instructions. Such individuals may be more likely to violate the rules and procedures that are in place to keep plant, people, and the environment safe. However, this is only conjecture on the author's part.

Clarke and Robertson (2008) demonstrated the role of agreeableness in accident involvement in another meta-analytic study. They identified 24 studies that had investigated the relationship between the “Big 5” Personality dimensions and self-reported accidents or personal injuries. The studies covered a wide range of occupations and nationalities, including Hansen's study on US chemical processing workers (production and maintenance) and UK personnel on offshore drilling rigs and production platforms. It should be noted that several of the studies were conducted on taxi or bus drivers from India, Turkey, South Africa, and the United States, thus covering the area of driver behavior rather than industrial safety. Although other personality dimensions were measured in some of the 24 studies, Clarke and Robertson (2008) only focused on the measurements of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Their meta-analytic study found that low Agreeableness, low Conscientiousness, and high levels of Openness and Neuroticism were all positively correlated with accident involvement, with little evidence of a relationship between Extraversion and accidents. In other words, people who were disagreeable and lacked conscientiousness were more likely to have been involved in accidents as were people who were more open to experience. This would seem to make sense because, as noted earlier, such individuals would have less tendency to comply with social norms and rules and regulations and a higher tendency to explore new ways of working, which again may not be compliant with the organization's way of doing things. However, somewhat surprisingly, people who were more neurotic also showed higher levels of accident involvement. This seems counterintuitive since one would have expected highly neurotic individuals to be risk averse and safety conscious. 
However, all personality dimensions except Agreeableness demonstrated evidence of being situation specific according to certain statistical controls used in the meta-analysis. This means that the effects of the personality characteristics were only evident in specific contexts, for example, a particular industry or occupation. For Agreeableness, the relationship with workplace accidents appears to be consistent across occupations.

There is one last study worth mentioning in the context of personality and accident involvement in a chemical process industry. This study was conducted in the UK offshore oil and gas industry (Sutherland & Cooper, 1991) and did not focus on the Big 5 but instead took measurements of the Type A/Type B personality types, along with neuroticism and extraversion/introversion. Type As typically display behavior patterns such as competitiveness, hostility, and time pressure. These individuals are often categorized as "workaholics" and appear to live with high levels of stress in their lives. Type Bs, on the other hand, display more relaxed behavior, less competitiveness, and less hostility. Apart from Type A/Type B personality, Sutherland and Cooper's (1991) questionnaire also measured neuroticism, extraversion/introversion, job satisfaction, self-reported stress, and accident involvement. The study involved 360 personnel working in the European offshore oil and gas industry on both production platforms and drilling rigs. These personnel had been surveyed previously, and this was a follow-up study 1 year later. The results showed that both the Type A behavior patterns and neuroticism were associated with increased accident involvement, lower job satisfaction, and higher levels of stress. Extraverts actually seemed to report more accidents than introverts, but Sutherland and Cooper found that introverts had also been involved in accidents leading to personal injury; however, they were less likely to report them. The propensity to report may therefore be a characteristic of the extraverts.
Type As were more prevalent in the sample of offshore workers used in this study and they also seem to be more characteristic of offshore workers in general (author's own observations and results from unpublished research), but Type Bs do exist offshore and there may be aspects of self-selection into offshore work and certain job roles, according to personality type. Not everyone can cope with the rigors of offshore life and long periods of isolation from family and friends. The role of stress was also examined in this study, and this topic will be covered in the next section.

In conclusion, the meta-analytic studies mentioned earlier indicate that only the personality trait of “Agreeableness” seems to have a consistent relationship with accident involvement across a wide range of industries, professions, and occupations with disagreeable personalities being more likely to have been involved in self-reported accidents. The other personality traits appear to exert their influence via associations with other factors such as safety climate, attitudes to safety, stress, and job satisfaction, which are factors that also exert an influence on the likelihood of being involved in an accident. It is therefore difficult to disentangle cause and effect here.

From the perspective of making interventions to improve safety, it is the author's personal belief that there is little an organization can do to change someone's personality; however, safety climate, levels of stress, job satisfaction, and attitudes to risk and safety can be managed and modified. Of course, organizations can select for particular personality types; for example, one could argue that only highly agreeable people should be selected to work for high-risk industries such as the chemical process industry. However, selecting for one particular personality trait may mean selecting out other personality traits that are beneficial for the organization in other ways. For example, the organization may also want personnel who are conscientious or competitive, or are quiet, thoughtful, analytic, and introverted. Therefore, it is suggested that personality is not an area an organization should focus on if it wants a viable, flexible, and competitive business. Nevertheless, it may be a factor to take into consideration when candidates are closely matched on a person specification for a job description and recruiters are having difficulty in making a recruitment selection. This may be particularly relevant when selecting candidates for a safety critical position in the chemical process industry.

4.4 Occupational Stress

No discussion of PIFs would be complete without reference to occupational stress. There is an extensive literature on the subject, and it is one factor that has been consistently related to accident involvement in a number of industries. Sutherland and Cooper's (1991) study on stress has already been mentioned, but there are many other studies, some of which have been conducted in the chemical process industries. The word "stress" is widely used, but occupational or job stress is specific to the workplace and arises from the conditions experienced there. Like so many human factors issues, stress does not exist in isolation from other PIFs, and it can be debated whether it belongs under individual, job, or organizational factors; in reality, it can span all three. For example, personality, lack of skills and competence, excessive workload, inadequate supervision, badly written procedures, and a poor safety climate can all impact on stress and therefore on the performance of both the individual and the organization. Of course, factors external to the work environment will also play a role in creating stress, for example, problems with work–life balance due to shift work or excessive overtime. Fatigue is often treated as a separate issue, but when considered within an occupational setting the author considers fatigue to be closely related to job stress, creating a vicious circle where stress leads to tension, worry, and lack of sleep, and the increase in fatigue makes it less easy to cope, thereby creating more stress. The techniques to manage both stress and fatigue are very similar and will be covered in the discussion later.

Occupational stress has been defined as the physical and psychological states that arise when an individual no longer has the resources to cope with the demands and pressures of the situation (see Michie, 2002 for a comprehensive review). It is considered to be the result of the interaction between the person and the environment. This is important to remember, since one person's “stress” may be another's “challenge”; therefore the feeling of being stressed and unable to cope is very much an individual-level phenomenon.

The causes of stress are manifold. They include factors that are intrinsic to the job such as work overload, time pressure, and poor working environment, e.g., noise, cramped conditions, lack of adequate lighting. The person's job role may also be an issue, such as role ambiguity and conflict; i.e., goals and expectations are not clear. Employees might have a poor relationship with their boss or with colleagues or they may feel that they are not being promoted quickly enough or are insecure about their job. Finally, the climate, culture, and structure of the organization may be a problem with a lack of participation in decision making, communication, and consultation.

We have already covered some of the factors that the individual will bring to the situation that contribute to feelings of being stressed. These include personality characteristics such as anxiety, neuroticism, and the Type A behavioral pattern. Factors external to the job, such as family problems or life crises, can be stressors, but research has shown that the main contributor to job stress tends to be the organization itself and how people are being managed within that organization. Well-designed workplaces with clear roles, responsibilities, and expectations for their staff will tend to experience less work-related stress. Adequate training and competency programs, well-written procedures, an engaged workforce, and supportive supervision and management will also contribute to a reduction in stress. As is so often the case, addressing PIFs is the key to good safety performance.

The consequences of stress for the individual include physiological responses such as increased blood pressure and heart rate and behavioral responses such as increased smoking and drinking and either over- or under-eating. Needless to say, prolonged exposure to stress can have a long-term impact on both physical and mental health, e.g., coronary heart disease and depression. Researchers and medical professionals distinguish between acute and chronic stress. Acute stress is the immediate "fight or flight" response to a perceived threat, which leads to physiological changes in the body such as the release of adrenalin (associated with "butterflies in the stomach" and increased breathing rate). The body is then activated to deal with the threat either by fighting it or by running away, but in modern-day society this is not necessarily a practical solution, and so the individual starts to suffer from chronic stress, which is the prolonged exposure to stressors from the environment. It is this prolonged exposure that is considered to be dangerous and can lead to mental and physical consequences.

The consequences for the organization of chronic stress can include reduced quantity and quality of work, increased absenteeism and high turnover of staff, and reduced job satisfaction and morale. Ultimately, if employees are increasingly stressed and end up leaving, the organization's reputation might be damaged and it might find it harder to recruit personnel in the future. Of course, other potential negative consequences for both the individual and the organization are reduced levels of safety and increased accident involvement. There are many studies reporting the impact of stress on safety performance in the industrial context, although very few appear to have been conducted in the onshore chemical process industries. However, by way of example, some of the studies conducted in the offshore oil and gas industry will be discussed here.

4.4.1 Occupational Stress in the Offshore Oil and Gas Industry

The offshore environment is an inhospitable place; installations are usually located far from land and are effectively chemical processing plants with accommodation built on top. For anyone who has ever visited an offshore installation, it quickly becomes apparent that they are not built with humans in mind. Space is at a premium, and stairways, doors, and gangways can be small and narrow. Parts of the plant can be difficult to access, and there appears to have been minimal ergonomic input into the design of installations. Because of this remoteness, workers are often required to spend up to 3 weeks offshore, which adds the stressor of being far from family and friends. Apart from being exposed to the risks arising from loss of containment and from flying in a helicopter to reach the installation, offshore workers are exposed to other physical and psychosocial stressors such as noise, vibration, weather (cold, heat, wind, rain, snow), 12-h shifts, and sometimes dull, monotonous work with little opportunity for developing new skills.

Sutherland and Cooper (1986, 1991, 1996) conducted some of the earliest studies investigating the relationship between occupational stress, mental health, and accidents in the offshore industry. Their studies included two samples of offshore personnel from the European offshore industry (mostly British): a sample of 190 in 1986 and a sample of 310 in 1991. A questionnaire for “stress auditing” was developed from interviews with offshore personnel, asking about aspects of their job, lifestyle, and accident involvement. Job satisfaction, psychological health, and social support were also measured. Analysis of the “stressor” questionnaire indicated 12 factors: career prospects and reward; safety and insecurity at work; home/work interface; understimulation, i.e., low demand; physical conditions (working and living); unpredictability of work pattern; living conditions; physical climate and work; organization structure and climate; physical well-being; work overload; and transportation (e.g., flying in a helicopter). The top three causes of stress related to pay and conditions, i.e., rate of pay, lack of paid holidays, and pay differentials between operating and contracting staff, but relationships at home and at work were also significant, and these measures were correlated with job satisfaction and mental health. Accident victims reported less job satisfaction and poorer mental health, although it is difficult to ascertain cause and effect here. In relation to this chapter, the key issue is whether stress impacts on safety performance and leads to accidents.

Sutherland and Cooper (1986, 1991) raised the possibility of this relationship, and Rundmo (1992) suggested that stress could play an indirect role in accident causation among Norwegian offshore workers. However, in an extensive study of health and well-being in the UK offshore industry sponsored by the Health and Safety Executive, Mearns and Hope (2005) used the “Health Offshore” questionnaire to survey 1928 offshore workers on 31 installations to evaluate their perceptions of Health Climate and the measures that had been put in place to manage their health and well-being. In one section, respondents were asked to rate the extent to which they felt able to cope with any pressures experienced at work. Most respondents felt they coped well, and there did not seem to be a significant relationship between self-reported levels of stress and accident involvement. Overall, 21% of respondents indicated that they had received support (in the form of advice, information, or guidance) to help them cope with the stress they experienced in the workplace. When the effectiveness of this support was rated, there was considerable variation between installations, but the general consensus was that the forms of support currently offered were only moderately useful. This is somewhat surprising given that 50% of the offshore medical staff who responded to the questionnaire reported that they were trained in stress management. Furthermore, 37% of the 31 installations involved in the study offered training courses on stress management for their workforce.

In conclusion, the research evidence suggests that it is the consequences of work-related stress, such as psychological and physical ill-health, fatigue, and changes in how people think and feel, that can cause the human errors or violations that lead to accidents, rather than stress in itself. It is also worth noting that people suffering from the symptoms of stress may be self-medicating or receiving assistance from medical practitioners to alleviate those symptoms; the medication may then impact on performance, causing drowsiness, disturbed vision, and so on.

4.4.2 Managing Occupational Stress

The best way to manage occupational stress is to address the organizational issues that affect it such as improving employee awareness of stress; implementing regular stress assessments; developing a stress management policy and procedures; mitigating the impact of organizational change and job uncertainty on the workforce by good communications; and developing a positive health and safety culture which includes the reporting of psychological ill-health and considers the impact of stress and its associated symptoms when investigating accidents and incidents. Furthermore, as the next section will demonstrate, attending to job factors such as good design, procedures, and working environment will further mitigate the causes and consequences of occupational stress.

5 Job Factors

Job factors include the nature of the task and the environment in which it is conducted. This covers factors such as equipment, workload, procedures, and displays and controls. As with the other sections of this chapter, it is not possible to cover the entire list of job factors in detail, so this section will only discuss a few. As a starting point, it is important to know that a raft of measures exists to ensure that equipment, control rooms, plant, and processes are designed in accordance with key ergonomics standards (see the Engineering Equipment & Materials Users Association, EEMUA). For example, equipment should be designed in accordance with EN 614 Parts 1 and 2, and control rooms in accordance with BS EN ISO 11064, EEMUA 191, and EEMUA 201. The reader is referred to the EEMUA website and these standards for further information. Other recommendations are that consideration should be given to the operators’ body size, strength, and mental capability, and that both plant and process should be designed to facilitate operation and maintenance. The design should take account of all phases of the plant life cycle, including decommissioning, and all foreseeable operating conditions such as plant upsets and emergencies. Finally, consideration should be given to the interface between the end user and the system, and one way to ensure this is to involve users in the design process. Users should include plant operators, control room operators (CROs), maintenance staff, and systems support personnel. Unfortunately, this involvement often seems to be overlooked in the design process. Cost also seems to play a role in suboptimal design; however, it is important to note that proper design with humans in mind can prevent accidents and incidents further down the line in the life of the plant and equipment.

5.1 Design of Control Rooms

Control rooms provide an important safety critical barrier to MAHs in the chemical process industries; however, CROs can have a number of challenges to deal with. These challenges include having to deal with too many alarms simultaneously (alarm flooding), several safety critical tasks that have to be performed simultaneously (workload), communications equipment and display equipment positioned apart in the control room even when they need to be used together (design), and uneven workloads, with long periods of monitoring tasks interspersed with periods of high intensity when dealing with abnormal situations.

It is therefore recommended that due care is put into designing control rooms for normal, abnormal, and emergency situations. There are six main areas that require attention:

1. Layout

2. Working Environment

3. Control and Safety Systems

4. Job Organization

5. Procedures and Work Descriptions

6. Training and Competence

The Safety and Reliability Group at SINTEF (2004) developed the CRIOP methodology to contribute to “verification and validation of the ability of a control center to safely and efficiently handle all modes of operation including start-up, normal operations, maintenance and revision maintenance, process disturbances, safety critical situations and shut down” (p. 2). CRIOP is a registered trademark and stands for Crisis Intervention and Operability analysis, a scenario-based method. Organizations such as Statoil and Norsk Hydro were involved in its development for the Norwegian offshore sector, and the methodology was based on interviews, user discussions, workshops, and contributions from experts.

e-Operations were added as a seventh area in the 2004 version of CRIOP because remote control and remote operations were, at that time, a recent development in the Norwegian offshore industry. This is partly a risk-reduction measure, since fewer people are required to travel offshore to conduct the work, but it can also be considered a cost-efficiency measure.

CRIOP works on the basis of checklists and scenarios. The checklists used in design and operations are based on “best practice,” including standards and guidelines such as ISO 11064, EEMUA 191, and IEC 61508. The Norwegian Petroleum Directorate regulations have also been taken into account. The checklists are laid out in a particular way for clear and easy usage. There is a checklist for each of the six areas, each comprising: numbered points (e.g., 1, 1.1, 1.2); a Description (e.g., “Are breaks planned/coordinated with control center tasks? Yes/No/Not Applicable”); a Reference to Documentation (e.g., ISO 11064-1); and Comments, Recommendations, and Responses.
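The checklist layout just described can be sketched as a simple data structure. The class and field names below are this author's illustration of the layout (numbered point, description, Yes/No/Not Applicable response, documentation reference, comments), not part of the CRIOP tool itself.

```python
# Hypothetical sketch of a CRIOP-style checklist item as a data structure.
# The fields mirror the layout described in the text; the class and example
# values are illustrative only, not CRIOP's actual format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChecklistItem:
    number: str                    # numbered point, e.g., "1.1"
    description: str               # the question posed to the review team
    reference: str                 # documentation reference, e.g., "ISO 11064-1"
    response: Optional[str] = None # "Yes", "No", or "Not Applicable"
    comments: str = ""             # comments and recommendations

# Example item based on the description given in the text
item = ChecklistItem(
    number="1.1",
    description="Are breaks planned/coordinated with control center tasks?",
    reference="ISO 11064-1",
)
item.response = "Yes"
print(item.number, item.response)
```

Representing each checklist point this way makes it straightforward to collect responses across all six areas and to flag any "No" answers for the action plans that conclude a CRIOP review.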

The final stage of the CRIOP process is Scenario Analysis, which consists of Introduction, Planning, Participants, Duration, Group discussions, Documentation, Number of Scenarios, Organizational Learning, and Framework. This is followed by the identification of the actors who will be involved in the events making up the scenario. Observations are made of how the events of the scenario are enacted and interpreted, including planning, decision making, and action/execution. A checklist of PIFs (referred to as Sociotechnical factors in CRIOP) is used in the scenario analysis. Action plans are developed from the outcomes of the scenario analysis, with the implication that management implements these plans.

In conclusion, CRIOP provides a method whereby the full complement of human factors issues can be addressed through a systematic, applied enactment of events within process industry-specific scenarios, particularly involving the all-important control room at the center of operations.

5.2 Procedures

In the author's personal experience of over 25 years as an academic and consultant, frontline workers in the chemical process industries constantly raise the usability of procedures as an issue. This is particularly the case for maintenance personnel, who seem to be largely ignored when it comes to procedural usability. One of the main complaints raised in questionnaires and focus groups with frontline workers is that procedures are written by “engineers” who never visit the worksite to understand the context in which those procedures are to be used. Furthermore, work as “planned” rarely fits exactly with work as “executed,” with the exception of the simplest of tasks. This means that frontline workers constantly have to adapt and adjust the procedures in use in order to achieve their work objectives. When this happens, the new way of working can gradually become the “norm” and drift away from the original intent of the procedure. New recruits observe, copy, and conduct their work in the new way, thereby being “normalized” by the more experienced workers, who may have forgotten what the correct procedure is. Another problem with procedures is that they can grow “arms and legs” and become lengthy and unwieldy. This often occurs in response to accidents and incidents, where a new section (or sections) is incorporated to prevent such an incident happening again.

It is important to clarify what is meant by the word “procedures.” This term can mean different things to different people, depending on their role in the organization. The Business Dictionary (http://www.businessdictionary.com) defines a procedure as “A fixed, step-by-step sequence of activities or course of action (with definite start and end points) that must be followed in the same order to correctly perform a task.” Repetitive procedures are called routines, but there are also method statements/specifications and work instructions. A method specification is described as a “Statement of requirements that prescribes a method of achieving a desired standard, instead of prescribing the standard itself.” Method statements are likely to be used by designers, engineers, or managers. A work instruction is “A description of the specific tasks and activities within an organization. A work instruction in a business will generally outline all of the different jobs needed for the operation of the firm in great detail and is a key element to running a business smoothly.” In the author's experience, frontline workers are usually operating according to work instructions but often with reference to higher order procedures. In addition, they have to comply with rules and regulations. A rule is defined as “an authoritative statement of what to do or not to do in a specific situation, issued by an appropriate person or body. It clarifies, demarcates, or interprets a law or policy.” A regulation is “a principle or law (with or without the coercive power of law) employed in controlling, directing or managing an activity, organization or system.” It goes without saying that rules and regulations are the province of managers and leaders, but the expectation is that they are understood and followed throughout the whole organization.

Frontline workers will be expected to work according to a set of procedures or work instructions. The procedures might be generalized across a number of different tasks and situations, but the work instructions will be specific to the tasks being conducted on a day-to-day basis. Work instructions tend to be relatively clear and straightforward, but procedures can often be seen as lengthy and ambiguous. To help with the process of writing clear and concise procedures, the UK Health and Safety Executive has produced guidance on preparing procedures in its Human Factors Briefing Note No. 4. The Major Accident Prevention Policy (MAPP) should describe how procedures should be developed, reviewed, revised, and publicized. In particular, COMAH sites should have a procedure for writing procedures, which should cover which tasks require procedures, how detailed those procedures should be, how to keep them up to date, and how to ensure compliance with them. It is also important that procedures are reviewed on a regular basis through consultation with users, walkthroughs of the procedures during actual work tasks, and identification of “informal” procedures or “workarounds.” Analysis of incidents in which noncompliance with procedures has been identified as a causal factor is also recommended. Table 4 outlines a procedures checklist developed by the HSE.

Table 4

Procedures Checklist from HSE Human Factors Briefing Note 4

Procedures should be:

Always easy to find, particularly for operational, commissioning, maintenance, and abnormal/emergency tasks.
Completely up to date (involving users should help to ensure this).
Set out in logical steps, starting with general instructions and working down to specifics.
Very easy to read: use words people understand; use diagrams, pictures, flowcharts, and checklists; and ensure the size, color, and style of lettering and illustrations are clear.
Accurate, with no inconsistencies or inaccuracies in the content.
Clear in highlighting steps where care is required, for example, when a particular hazard is present.
Descriptive of any items of special equipment, for example, tools and clothing.
In good condition: not dirty, torn, or with pieces missing.
Used to train people to do the job, helping to ensure compliance with the correct procedure.
Changed quickly if the job changes (to be considered under management of change).
Consistent with other information, for example, verbal instructions from supervisors.
Supplemented by other job aids, such as pocket-sized checklists and reference materials.

The HSE guidance recommends using task analysis to fully understand how a job is actually done, including the identification of hazards and whether a procedure is the best way of controlling the hazard. As noted earlier, it is important to involve the people who actually do the task in writing the procedure. They will have the most realistic view of how the task is done (as compared to managers who may have a view on how the task should be done). Those who use the procedures can also advise on how and why the procedure might not be complied with (i.e., anticipate potential violations) and can also advise on the best wording and style of layout to use. Finally, involving the frontline workforce in developing procedures ensures ownership of the procedure and therefore more motivation and commitment to actually using the procedure in the way it is intended.

Other advice covers training in the procedure, including the training of contractors. This will ensure workers are familiar with the content and can point out any errors or impracticalities. It is important that contractors are familiar with the terminology used and that the needs of novice users are taken into account. Finally, make sure that procedures can be found quickly and easily on the organizational system.

Proper management of procedures is also important. Keep checking that they are being used properly and, if not, find out why. Workers may have found an easier way to complete the task, but their new method might entail some inherent risk. On the other hand, the new way of doing things might be both more efficient and safer. Make sure that there is a system for capturing problems and that any problems are dealt with as quickly as possible. If a problem cannot be dealt with promptly, then an explanation is required as to why.

Managers should plan for any changes to the task due to changes in equipment or the methods used. If it is not possible to change the procedure quickly and get people trained in it, then temporary working instructions or extra supervision may be required. Procedures should also be controlled, often by stating that a printed copy of a procedure is uncontrolled (for example, only procedures accessed from the company website are controlled). Finally, a log should be kept of who is responsible for each procedure, and any out-of-date procedures should be disposed of.

6 Organizational Factors

At the organizational level, culture, leadership, shift and rotation patterns, staffing levels, and communications are key factors. It could be argued that many individual and job factors originate at the organizational level, and if human factors is not integrated into the safety management system (SMS), it is unlikely that the SMS will achieve its objectives. The integration of HF into the SMS will lead to a positive safety culture, where the processes and practices outlined in the SMS become recognized good practice.

6.1 Safety Climate and Safety Culture

Safety climate and safety culture deserve a lengthy section in this chapter, reflecting both their complexity and the academic and practitioner interest shown in these concepts. They are particularly important because public inquiries into most major accidents involving highly hazardous materials have found a distinct lack of safety culture (or climate) to be one of the underlying factors contributing to the accident.

There are numerous research articles and book chapters available describing how to measure both safety culture and safety climate and demonstrating their relationship with safety performance. Although the two terms are often used interchangeably, their history and etiology are very different, and ultimately the concepts reflect two different but overlapping aspects of organizational safety in high-hazard industries such as the chemical industry.

6.1.1 Safety Climate

Professor Dov Zohar is credited with publishing the first study on safety climate in Journal of Applied Psychology in 1980. According to Zohar (1980), safety climate represents a summary of the molar perceptions that workers have about how safety is managed in their working environment. He developed a 40-item questionnaire measuring 8 safety climate factors in the Israeli manufacturing industry. These factors were:

 importance of safety training;

 effects of required work pace on safety;

 status of the safety committee;

 status of the safety officer;

 perceived level of risk at the work place;

 management attitudes toward safety;

 effect of safe conduct on promotion; and

 effect of safe conduct on social status.

The technique Zohar used to analyze the responses to his questionnaire is referred to as Exploratory Factor Analysis (EFA), which uses a mathematical approach to reduce a number of measurable and observable variables, i.e., responses to a 40-item questionnaire measuring perceptions of safety, to a smaller number of unobservable “latent variables,” i.e., the 8 factors. These 8 factors are therefore hypothetical constructs used to represent the measurable variables. Effectively, EFA attempts to find out which items on the questionnaire “go together” to form the fewest number of “factors” possible. If questionnaire items share a “commonality” (determined largely by correlational techniques), it makes sense to progress research and development on a few factors rather than on 40 questionnaire items.
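The data-reduction step that EFA performs can be illustrated with a minimal sketch, here using scikit-learn's FactorAnalysis on synthetic data with the same dimensions as Zohar's study (40 items, 8 factors). The data are randomly generated, so this shows only the mechanics of the technique, not Zohar's actual analysis.

```python
# Minimal sketch of exploratory factor analysis: 40 observable questionnaire
# items are reduced to 8 latent factors. The data are synthetic; the item and
# factor counts mirror Zohar's (1980) study for illustration only.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_respondents, n_items, n_factors = 300, 40, 8

# Simulate item responses as noisy linear combinations of latent factors.
latent = rng.normal(size=(n_respondents, n_factors))
true_loadings = rng.normal(size=(n_factors, n_items))
responses = latent @ true_loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
scores = fa.fit_transform(responses)   # one row of 8 factor scores per respondent
loadings = fa.components_              # how strongly each item "goes with" each factor

print(loadings.shape)  # (8, 40)
print(scores.shape)    # (300, 8)
```

In practice, researchers inspect the loadings matrix to see which questionnaire items cluster on each factor and then name the factors accordingly, as Zohar did with his eight.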

In order to determine whether the “factors” measure what they are supposed to measure, they are often validated against another measure of safety, e.g., accidents or near misses. In Zohar's study, the safety climate measurement was validated against inspectors’ evaluations of the safety performance in the various factories, where the climate survey was deployed.
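Validation of this kind amounts to correlating the climate measure with the external criterion. The sketch below computes a Pearson correlation between invented per-factory climate scores and invented inspector ratings; the numbers are illustrative only and are not Zohar's data.

```python
# Sketch of criterion validation: correlate mean safety climate scores with
# an external safety measure. All numbers are invented for illustration.
import statistics

climate_scores = [3.1, 2.4, 3.8, 2.9, 3.5, 2.2]     # hypothetical mean climate per factory
inspector_ratings = [7.0, 5.5, 8.5, 6.0, 8.0, 5.0]  # hypothetical inspector evaluations

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(climate_scores, inspector_ratings)
print(f"r = {r:.2f}")  # strongly positive in this toy data
```

A strong positive correlation of this kind is what gives a climate instrument criterion validity; a near-zero correlation would suggest the factors are not capturing anything related to actual safety performance.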

Brown and Holmes (1986) tried to replicate this factor structure in a US sample using confirmatory factor analysis, a statistical technique that sets up a predetermined theoretical “model” (in this case an eight-factor model) against which the questionnaire data are tested, using similar statistical principles to EFA. Instead, they found a three-factor model, which they labeled: employees’ perceptions of management concern about their well-being, management activity in responding to problems with well-being, and their own physical risk. Further research on measuring safety climate appeared to become conflated with the measurement of “safety culture” (probably due to the emergence of the concept of safety culture following the Chernobyl nuclear disaster in 1986), until the early 1990s, when Dedobbeleer and Beland (1991) investigated safety climate in the construction sector and tried to replicate Brown and Holmes’ factor structure. Instead, they found that construction safety climate was best represented by two factors: management commitment and workforce involvement. The literature then became studded with a plethora of studies on safety climate in various high-hazard industries such as offshore oil and gas (Cox & Cheyne, 2000; Mearns, Flin, Gordon, & Fleming, 1998, although the latter refer to safety culture in offshore environments), construction (Poussette, Larsson, & Törner, 2008), road administration (Niskanen, 1994), the chemical industry (Berg, Shahriari, & Kines, 2013; Donald & Canter, 1994; Vinodkumar & Bhasi, 2009), nuclear reprocessing (Lee, 1998), and nuclear power plants (Lee & Harrison, 2000). Again, these two nuclear studies refer to “safety culture,” no doubt reflecting the introduction of the term following the Chernobyl disaster.
Each study found that the statements used in their questionnaires loaded on slightly different factor structures, possibly reflecting the management structure and arrangements and underlying “professional culture” of the organizations and industries being targeted. It is also the case that questionnaires used in the various studies are developed from first principles with the researchers developing their own question sets which will differ from study to study. It is therefore not surprising that different factor structures emerge, since different “measurable” questionnaire items are used to measure the underlying, latent constructs.

In many ways, one would not expect “safety climate” to remain a static phenomenon as lessons are learned from incidents and there are new developments in legislation, technology, equipment, workforce training and competency, leadership and management for safety, etc. Therefore, over time, different factors may develop or become more relevant. Flin, Mearns, O’Connor, and Bryden (2000) reviewed a number of safety climate studies and concluded that the main themes in safety climate measurement tools, appearing in two-thirds of questionnaires available at that time, were related to management, safety systems, and risk. A more recent cross-validation of safety climate scales using confirmatory factor analysis (Seo, Torabi, Blair, & Ellis, 2004) found that safety climate grouped around five key themes: management commitment to safety, supervisor support for safety, coworker support for safety, employee participation, and training and competence. It is the current author's belief that this makes empirical sense, since the safety climate is supposed to measure how safety is managed by the organization, site or plant, depending on the level being targeted by the study.

One aspect that is strongly supported by safety climate research is the notion that climate is a “group phenomenon,” i.e., the perceptions are shared among members of the workforce (see Zohar, 2010). This means that members of the workforce from the same location or within the same work group, e.g., operations or maintenance, show significant consensus in their safety perceptions, compared to their attitudes to safety (Poussette et al., 2008). Indeed, the prevailing atmosphere with regard to risk and safety at the place of work is critical in defining the characteristics of a safety climate. This is demonstrated most clearly by the work of Mearns et al. (1998), who showed that offshore installations operated by the same oil and gas company, e.g., BP, Total, or Conoco, can have very different “safety climates” as reflected by workforce perceptions of supervisor and management commitment to safety.

6.1.2 Safety Climate in the Chemical Industries

From the perspective of the chemical process industries, each study seems to use its own safety climate measurement tool, making direct comparisons between studies difficult. Donald and Canter (1994) developed a questionnaire to measure safety climate and safety attitudes in UK chemical plants. Their study established the validity and reliability of their questionnaire and found a relationship between the safety climate and occupational injuries. It has not been possible for the current author to access the full paper and therefore the findings reported here only reflect the content of the abstract.

Vinodkumar and Bhasi (2009) developed a new questionnaire to measure safety climate in the chemical industry in India. This questionnaire was based on existing measures but was designed to better reflect the Indian “national culture.” The questionnaire was then tested on 2536 employees (workers, firstline supervisors, and managers) across 8 chemical industrial units in Kerala, India. The results indicated eight factors:

 management commitment and actions for safety;

 workers knowledge and compliance with safety measures;

 workers attitudes toward safety;

 workers participation and commitment to safety;

 safeness of work environment;

 emergency preparedness;

 priority for safety over production; and

 risk justification.

These eight factors are similar to those identified in other industries and in other countries. Also, the safety climate scores differed between the chemical companies and across the different levels of hierarchy with workers being the least positive in their safety climate perceptions and management being the most positive. Finally, the more positive safety climates were correlated with lower self-reported accidents, a finding which has also been corroborated in other safety climate studies.

Berg et al. (2013) used the Nordic Occupational Safety Climate Questionnaire (NOSACQ-50) to survey workers at two chemical plants in Sweden. The NOSACQ-50 consists of 50 items measuring perceptions of: management commitment and priority to safety; worker commitment and priority to safety; safety empowerment of the workforce; safety justice; safety communication; and trust in safety systems. In common with safety climate studies in other industries, the workforce had more negative perceptions of safety climate than supervisors and management, although perceptions of the climate were generally positive. Furthermore, at both plants, shift workers had significantly lower scores on all the safety climate scales than daytime workers.
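The shift-versus-daytime comparison reported by Berg et al. amounts to comparing group means on each climate scale. The toy sketch below uses invented per-respondent scores (NOSACQ-50 items are rated on a 4-point agreement scale); the numbers are not Berg et al.'s data.

```python
# Toy sketch of comparing mean scale scores between worker groups, in the
# spirit of Berg et al.'s (2013) shift vs. daytime comparison. All scores
# are invented; NOSACQ-50 items are rated on a 1-4 agreement scale.
import statistics

# Hypothetical per-respondent mean scores on one climate scale
shift_scores = [2.8, 3.0, 2.6, 2.9, 2.7]
daytime_scores = [3.2, 3.4, 3.1, 3.3, 3.5]

shift_mean = statistics.mean(shift_scores)
daytime_mean = statistics.mean(daytime_scores)

print(f"shift: {shift_mean:.2f}, daytime: {daytime_mean:.2f}")
# shift: 2.80, daytime: 3.30
```

In the actual study a significance test would be applied to each scale; this sketch shows only the descriptive comparison that underlies the reported finding.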

In conclusion, these safety climate studies indicate a certain level of consistency with regard to the emerging factors. Management commitment and the priority given to safety seem to predominate, with worker engagement and commitment, supervisor support, and understanding of the risks and safety systems also prevalent. The importance of management and leadership cannot be overstated, and this chapter devotes a separate section to the subject. The next section deals with the concept of safety culture and how it differs from safety climate.

6.1.3 Safety Culture

The term “safety culture” seems to have first been used in relation to the 1986 Chernobyl nuclear disaster. Both the International Atomic Energy Agency and the OECD Nuclear Energy Agency identified a “poor safety culture” in the former Soviet Union's nuclear industry as a contributory factor in the accident. In the wake of Chernobyl, the UK Advisory Committee on the Safety of Nuclear Installations (HSC, 1993) developed what has become one of the most cited definitions of the concept, i.e.:

The safety culture of an organization is the product of individual and group values, attitudes, perceptions, competencies and patterns of behaviour that determine the commitment to, and the style and proficiency of, an organization's health and safety management.

HSC (1993, p. 23)

Despite its widespread use, on first appearance this definition appears too general and not theoretically or empirically grounded; however, Cooper (2000) defends this wide-ranging definition by pointing out that the reciprocal nature of the interactions between individuals, groups, situations, and behavior is the essence of a safety culture. Moreover, the HSC report makes the valid point that a safety culture is “more than the sum of its parts” and is therefore challenging to both measure and manage. Ultimately, the goal must be to demonstrate a focus on safety through both management commitment and workforce engagement, reflecting both a “top-down” and a “bottom-up” approach, rather than one or the other group's perspective.

It is interesting to note that the concept of “safety culture,” invoked in response to this disaster, was then mentioned in Public Inquiries into other major accidents throughout the 1980s and 1990s, e.g., the Herald of Free Enterprise ferry sinking (1987), the destruction of the Piper Alpha offshore installation (1988), and the Ladbroke Grove rail crash (1999). The term has continued to be applied throughout the 2000s and 2010s, e.g., the Columbia Space Shuttle disaster (2003), the Texas City refinery explosion (2005), the loss of the RAF Nimrod MR2 aircraft in Afghanistan (2006), Deepwater Horizon (2010), and the Fukushima Daiichi nuclear disaster (2011). As Mearns, Whitaker, and Flin (2003) note, safety culture was originally a concept used to describe the inadequacies of safety management that resulted in major disasters; however, the concept is now being applied to explain accidents at the individual level, despite the fact that the validity of safety culture with regard to individual accidents is yet to be established. In contrast, the validity of safety climate in relation to occupational accidents and injuries has been clearly demonstrated in a number of studies (e.g., Christian, Bradley, Wallace, & Burke, 2009; Clarke, 2006b; Mearns et al., 2003).

6.1.4 Theoretical Approaches to Safety Culture

Reason (1998), Pidgeon and O'Leary (2000), and Guldenmund (2000), among others, have provided theoretical perspectives on safety culture. For example, Reason (1998) proposed that safety culture consisted of being informed about the organization's state of safety, which is dependent in turn on having a reporting culture and learning from these reports. However, people will only report errors and incidents if they are dealt with in a just and fair way, i.e., the organization operates a just culture. Finally, Reason believed that being flexible was also important. This means having the ability to reconfigure in the face of high tempo operations or certain kinds of danger, usually by moving from a typical organizational hierarchical structure to a flatter structure where people with the right level of expertise make the assessments and take the decisions.

Reason believed that it should be possible to “engineer a safety culture” by identifying and fabricating these essential components and building them into a working whole; however, a perusal of some of the other theoretical musings about the nature of culture seems to indicate that this may be easier said than done. The simple fact is that an organization does not have one single “safety culture”; rather, it is made up of “subcultures” of personnel from different occupational and professional disciplines. Measuring and managing a safety culture involves understanding those subcultures and the interaction between them.

Pidgeon and O'Leary (2000) argued that a “good” safety culture might be promoted by four factors: senior management commitment to safety; realistic and flexible customs and practices for handling both well-defined and ill-defined hazards; continuous organizational learning through practices such as feedback systems, monitoring, and analysis; and shared care and concern for hazards across the entire workforce. This closely resembles definitions of both safety climate and safety culture but, in the author's opinion, reflects more deeply seated beliefs and assumptions about the nature of risk and safety and how they are being managed.

An increasing number of organizations are starting to understand the importance of measuring safety culture as a “leading indicator” of the organization's state of safety; however, in this author's opinion, the lack of a safety culture is more likely to be associated with major process incidents than with occupational injuries such as slips, trips, and falls.

Seo et al. (2004) suggest that climate is more of a state (or mood), provides a “snapshot” of the organization's safety, and is measured “quantitatively” via a questionnaire. They suggest that culture is a stable, deeply held trait, more akin to personality, and is measured via “qualitative” methods such as interviews and focus groups. These are the definitions and measurement methods that the current author subscribes to, although other definitions and methods for measuring safety culture exist.

6.1.5 Conclusion

In conclusion, both the theoretical and empirical evidence suggest that safety climate and safety culture are distinct but overlapping concepts. Safety climate refers to the shared perceptions of the “state of safety” at a place of work, largely inferred from how supervisors and management behave in relation to the management of risk and safety, i.e., their demonstration of safety commitment. Safety climate can therefore be considered to reflect a particular “atmosphere” at the workplace and has mostly been studied in relation to its impact on occupational accidents and injuries. Given its location-specific focus, it is possible that a good safety climate depends heavily on how supervisors’ commitment to safety is perceived by the workforce. Nevertheless, proper supervision can only be achieved if more senior management commits time and resources to enable supervisors to meet their safety commitments.

Safety culture is considered to be a more deeply rooted concept based on the values, beliefs, attitudes, and behavior of organizational members and how these impact on the safety performance of the organization and the safety systems developed to protect that organization. The notion of a “lack of safety culture” tends to be invoked after major accidents, where government organizations and the general public at large consider it almost inconceivable that any high-hazard industry should be so remiss in its approach to safety as to allow such adverse events to occur. Increasingly, the research evidence seems to suggest that senior management oversight and commitment to safety is the key factor influencing safety culture. This very much subscribes to a “top-down” approach to safety culture, largely influenced by the psychology and management disciplines. Another body of thought, mostly derived from the sociology and social anthropology traditions, considers safety culture from the “bottom up,” i.e., the workforce and the groups that make up the workforce develop their own “cultures,” which may keep them “safe” despite poor management oversight and inadequate safety management processes and procedures. The interested reader is referred to a Special Issue of Safety Science published in 2000 for further discussion.

Workforce engagement and involvement is an important component of safety culture, but again, the true value of that engagement can only come about if management is willing to relinquish some power and control in order for the workforce to contribute to SMSs and processes. For many years, the focus in the chemical process and other high-hazard industries has been on “human error” and the mistaken belief that frontline workers are to “blame” for those errors. However, Reason (1997) has pointed out that “Rather than being the main instigators of an accident, operators tend to be the inheritors of system defects created by poor design, incorrect installation, faulty maintenance and bad management decisions. Their part is usually that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking.” This therefore leads on to the final section of this chapter, which will focus on leadership and management for safety.

6.2 Leadership and Management for Safety

There has long been a focus on what constitutes effective leadership and management in organizational settings, with a multitude of books, chapters, research articles, consultancy reports, training programs, and theories espousing and promoting the value of good leadership and management skills to ensure the success of an organization. While the general area of leadership and management has been well researched and reported, there has been less focus on the skills, knowledge, and attitudes that leaders and managers require to “lead and manage” for safety. The argument could be made that the same set of attributes is required as for good general leadership and management; however, there is something different about the motivational drivers for safety performance compared with other organizational performance metrics such as balance sheets, turnover, and productivity. The one major difference (in this author's opinion) is that with safety “nothing happens.” Safe performance is the expectation and the norm. “Unsafe” performance is usually presented as occupational accident statistics, which have an impact on individual lives but less of an impact on the organization as a whole. Major process incidents, in contrast, can lead to multiple fatalities among both employees and the general public and considerable damage to the plant and environment, e.g., Bhopal, Flixborough, Texas City. As has been clearly demonstrated for accidents such as Texas City and Deepwater Horizon, reductions in lost-time injuries are not a reliable indicator of how well major hazards are being managed. The organization must be set up to prevent all forms of harm, and the driving force for this state of affairs must be senior management, who set the overall direction and ethos for the organization.
The key role of senior management in setting and promoting the safety culture of the organization has been demonstrated in most of the research on the subject (Health and Safety Executive, 2003; International Association of Oil and Gas Producers, 2013; Prior, 2003).

The following sections consider the role of safety leadership at all levels of the management chain, from supervisors and team leaders, through middle managers, to CEO and Board level. Leadership and management are closely related, with managers planning, organizing, and maintaining the structure and systems of the organization and leaders inspiring and motivating the workforce to achieve organizational goals (of which safety is one of many). A successful business depends on having people with both qualities in the ranks of its management teams. Much of the research on leadership and management for safety has been focused at the supervisor level, but Public Inquiries into major accidents have identified that culpability ultimately lies at senior management level, and this accountability is now recognized in legislation, for example, in the UK's Corporate Manslaughter and Corporate Homicide Act 2007. The UK HSE has published guidance in support of this legislation: “Leading health and safety at work. Actions for directors, board members and organizations of all sizes” (HSE, 2013). Other guidance exists in the form of a Best Practice Guide from the Chemical Industries Association, “Process safety leadership in the chemical industry,” and the International Association of Oil and Gas Producers’ “Shaping safety culture through safety leadership” (OGP, 2013). Finally, the UK HSE has published an extensive review of the literature on effective leadership behaviors for safety (Health and Safety Executive, 2012).

In a review paper on leadership in healthcare, Flin and Yule (2004) present an excellent overview of some of the main findings generated by research into safety leadership behavior at the supervisor, middle management, and senior management levels, with a focus on Multifactor Leadership theory, otherwise known as Transactional/Transformational Leadership theory (Bass, 1998; Bass & Avolio, 1990). Transactional leadership refers to the “exchange” type of relationship that normally occurs between an individual and their superior. In other words, the supervisor/manager sets goals or targets, which are then rewarded when they are achieved (and perhaps punished when they are not). The most common styles of Transactional leadership are “Management by exception” (i.e., only intervening when goals/targets are not being achieved) and “Contingent reward” (reinforcement for achieving goals). Transformational leadership is believed to “augment” Transactional leadership and consists of four components:

1. Idealized influence—Transformational leaders act as role models for their followers, embodying vision into action.

2. Individualized consideration—Transformational leaders attend to each follower's needs, attending to individual strengths and development.

3. Inspirational motivation—Transformational leaders inspire followers to go beyond their level of comfort by linking purpose and meaning and driving people forward.

4. Intellectual stimulation—Transformational leaders encourage innovation and creativity as well as critical thinking and problem-solving skills among followers.

Bass and Avolio (1990) have demonstrated that a Transformational leadership style improves employee job satisfaction, organizational commitment, and workplace performance, and recent research has demonstrated that it also has a positive impact on the safety behavior of subordinates. Flin and Yule (2004) point out that the style of leadership for safety will vary according to management level within the organization, but ultimately both styles of leadership will be apparent depending on the situation. The following sections discuss safety leadership at the supervisory, middle management, and senior management levels.

6.2.1 Supervisors

Supervisors are in a difficult position as the individual “in the middle” facing more senior management and requirements to meet the strategic objectives of the organization on the one side and deploying their work teams and resources to achieve frontline goals on the other. As a result, supervisors face considerable challenges in the trade-off between “production and safety.” In one of the earliest studies of its kind, Andriessen (1978) concluded that supervisors have a direct influence on the safety motivation and safety behavior of the workforce, but they themselves will be influenced by their more senior managers; thus, senior management objectives and priorities will filter down through supervisors to the workforce level. Nonetheless, some research evidence suggests that supervisors can act as a “buffer” between senior management and their subordinates, protecting them from unreasonable or unachievable goals (Fleming, 2000). They do this by effective safety communication; valuing their subordinates and their contribution; making frequent visits to the worksites to encourage and support their subordinates rather than trying to “catch them out”; and allowing the work group to participate in decision making.

Experimental studies using worksite observations in the manufacturing sector have demonstrated how training supervisors in Transactional and Transformational leadership practices leads to improved safety behavior in subordinates (Zohar, 2002; Zohar & Luria, 2003), a finding echoed in studies in the construction sector (Conchie, Moon, & Duncan, 2013; Mattila, Hyttinen, & Rantanen, 1994). Other studies have indicated how supervisory safety practices influence safety in the military (Hofmann, Morgeson, & Gerras, 2003) and hospitality sectors (Barling, Loughlin, & Kelloway, 2002).

Fuller and Vassie (2005) conducted a benchmarking study into the role of the supervisor in relation to health and safety performance in the chemical process industries. Their objective was to benchmark supervisory styles for both low- and high-risk activities in chemical sites of different sizes and functions. Data from a questionnaire completed by 84 sites in the UK chemical sector provided the basis for the study. The findings showed no causal relationship between supervisory methods and health and safety performance, and most supervisors were focused on compliance with health and safety legislation and risk control measures. The study did indicate, however, that operational responsibilities lay with management and that organizational support was important for supervisory methods, particularly in the larger chemical organizations. This leads us on to the role of more senior managers in maintaining high safety performance in the chemical industries.

6.2.2 Middle Managers

There is surprisingly little research on the role of departmental or site leaders in safety (Flin & Yule, 2004). Part of the problem is that studies often do not clarify what level of “management” is being targeted, with the general term being used, which could refer to supervisors, site managers, or senior managers. Most studies of management commitment to safety are related to safety climate and safety culture (see earlier) and therefore will not be covered in any further detail here.

O’Dea and Flin (2001) carried out one of the few studies of site level safety leadership in the UK offshore oil and gas industry. They surveyed 200 offshore installation managers (OIMs) across 157 offshore production platforms and drilling rigs to investigate their leadership style. O’Dea and Flin found that these OIMs reported a more “telling and selling” style, rather than a “participative, inspirational” style, even though the managers were aware that the latter style was probably preferable when it came to influencing safety performance. In another offshore industry paper, O’Dea and Flin (2000) reported on the relationship between workforce safety compliance and involvement, perceived OIM commitment to safety, and Transactional/Transformational Leadership style. Transformational leadership was associated with more safety initiative among the platform workforce, whereas Transactional leadership showed no effect.

In conclusion, it would appear that the role of “middle management” in leadership for safety has not been well delineated in research studies. Their influence has largely been captured through studies of safety climate, where workforce perceptions of management commitment to safety are the key influence on workplace safety climate. Since members of the workforce rarely have any contact with senior managers, i.e., CEOs and Board members, it can be assumed that any reference to management as opposed to supervisors in safety climate studies will be targeting site or departmental managers.

6.2.3 Senior Managers

Flin (2003) referred to senior managers as a “neglected species,” but recently, they have come very much under the spotlight. The legal responsibilities of senior managers were articulated at the start of this section, and as a result of the outcomes of major accident Public Inquiries, their key role in determining and reinforcing the safety culture of the organization and associated standards and expectations has become very apparent, e.g., Chernobyl (International Atomic Energy Authority, 1992), NASA (NASA, 2003), and Deepwater Horizon (National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, 2011). Although generally far removed from frontline operations, senior management attitudes, behaviors, decision making, and deployment of resources can have an impact on safety. They are effectively the “controlling mind,” and subordinates will look to them for guidance on what is recognized and rewarded. It is acknowledged that senior managers have to balance a number of organizational goals, e.g., budgets, productivity, safety, but where MAHs can have devastating consequences if realized, senior management focus should be on safety. A study of 70 European chemical and petrochemical companies (Keller & Huwaishel, 1993) showed that only 23% reported safety as the top priority for management. As demonstrated by Fuller and Vassie's (2005) benchmarking study, management focus tends to be on compliance with health and safety legislation in order to prevent legal action being taken by the authorities.

Until relatively recently, there has been very little research on senior management attitudes and behavior in relation to safety. Rundmo and Hale (2003) investigated attitudes toward safety and accident prevention in 210 presidents, vice presidents, and managers in a Norwegian petrochemical company. They found that high management commitment, low fatalism, high risk awareness, and high safety priority were key attitudes. Research into senior leadership and safety in the UK petrochemical industry (Roger, 2013; Roger, Flin, & Mearns, 2011) led to the development of a six-category framework for key senior leadership functions:

1. Establishing safety as an organizational priority.

2. Establishing clear and open communication for safety.

3. Active involvement in safety activities.

4. Setting and maintaining safety standards.

5. Maintaining risk awareness.

6. Motivating and supporting the workforce.

In conclusion, senior managers should adopt a Transactional approach by ensuring compliance with regulatory requirements and providing resources for a comprehensive SMS. The research evidence also indicates the importance of a Transformational Leadership Style at senior management level demonstrated through visible and consistent commitment to safety, showing concern for people, encouraging participatory styles in their subordinates, and ensuring allocation of their time to safety issues (Flin & Yule, 2004).

7 General Conclusion

This chapter has covered a cross-section of the subdisciplines and areas that make up the discipline of Human Factors. The main role of Human Factors is to design, develop, and maintain systems of work that improve human efficiency but above all keep people, plant, and the environment healthy and safe. This applies to both occupational health and safety and process safety. Application of Human Factors begins at the design stage, with decades of research and practitioner experience providing examples of standards, legislation, and good practice. It is generally accepted that good design and engineering are the best way to prevent process incidents; however, the costs associated with designing for humans are often considered prohibitive, and human factors engineering is therefore not always implemented to a high enough standard. If designing for humans is not achieved, a second barrier to major incidents involves administrative controls such as procedures. Clearly written and usable procedures are a prerequisite for a safe workplace, as are adequate training and regular assessment of competency. Understanding the organization and how people are managed provides another important barrier to preventing both process and occupational incidents. This can be achieved through safety climate and safety culture assessment, depending on whether the objective is to address underlying attitudes and beliefs or perceptions of how safety is being managed at the workplace. Ultimately, although everyone is responsible for safety, it is management who are accountable, and that accountability has to be understood and complied with.

Apart from management understanding and executing their accountabilities, Human Factors should be integrated into the SMS. An SMS is unlikely to achieve its full potential to improve safety performance unless staff have a full understanding of Human Factors principles and are able to apply this understanding to support a positive safety culture. Human Factors should be considered as routinely as other important SMS activities such as risk assessment, cost–benefit analyses, and access to and deployment of resources. Human Factors principles can be incorporated into hazard identification and reducing risks to As Low As Reasonably Practicable (ALARP); designing systems, equipment, jobs, and tasks; staff training and competency assessment; and the management of change. At the end of the day, the SMS is only as good as the people who operate within it. To borrow an idiom, understanding and applying good human factors practice is “where the rubber meets the road.” It is the moment of truth for many process industry organizations when it comes to preventing major incidents.

References

American Psychological Association (APA, 2016). www.apa.org (accessed 23rd October, 2016).

Andriessen J. Safe behaviour and safety motivation. Journal of Occupational Accidents. 1978;1:363–376 [now Safety Science].

Barling J., Loughlin C., Kelloway E.K. Development and test of a model linking safety-specific transformational leadership and occupational safety. Journal of Applied Psychology. 2002;87(3):488–496.

Bass B. Transformational leadership. Mahwah, NJ: LEA; 1998.

Bass B., Avolio B. The implications of transactional and transformational leadership for individual, team and organizational development. Research in Organizational Change and Development. 1990;4:231–272.

Berg M., Shahriari M., Kines P. Occupational safety climate and shift work. Chemical Engineering Transactions. 2013;31:403–408.

Brown R.L., Holmes H. The use of a factor analytic procedure for assessing the validity of an employee safety climate model. Accident Analysis and Prevention. 1986;18(6):445–470.

BS EN ISO 11064 (Parts 1 to 6). (2000–2007). Ergonomic design of control centres.

Business Dictionary (2016). http://www.businessdictionary.com/definition/competence.html.

Cellar D.F., Yorke C.M., Nelson Z.C., Carroll K.A. Relationships between five factor personality variables, workplace accidents and self-efficacy. Psychological Reports. 2004;94(3 Pt. 2):1437–1441.

Center for Chemical Process Safety (CCPS). Guidelines for preventing human error in process safety. New York: American Institute of Chemical Engineers; 1994.

Chemical Industries Association. Process safety leadership in the chemicals industry—Best practice. London: CIA; 2008.

Christian M.S., Bradley J.C., Wallace J.C., Burke M.J. Workplace safety: A meta-analysis of the roles of person and situation factors. Journal of Applied Psychology. 2009;94(5):1103–1127.

Clarke S. Contrasting perceptual, attitudinal and dispositional approaches to accident involvement. Safety Science. 2006a;44:537–550.

Clarke S. The relationship between safety climate and safety performance: A meta-analytic review. Journal of Occupational Health Psychology. 2006b;11:315–327.

Clarke S., Robertson I. An examination of the role of personality in work accidents using meta-analysis. Applied Psychology: An International Review. 2008;57(1):94–108.

Conchie S., Moon S., Duncan M. Supervisors’ engagement in safety leadership: Factors that help and hinder. Safety Science. 2013;51(1):109–117.

Cooper D. Towards a model of safety culture. Safety Science. 2000;36:111–136.

Cox S., Cheyne A. Assessing safety culture in offshore environments. Safety Science. 2000;34:111–119.

Dedobbeleer N., Beland F. A safety climate measure for construction sites. Journal of Safety Research. 1991;22(2):97–103.

Donald I., Canter D. Employee attitudes and safety in the chemical industry. Journal of Loss Prevention in the Process Industries. 1994;7(3):203–208 [abstract only accessed].

Energy Institute. Guidance on human factors safety critical task analysis. London: Energy Institute; 2011.

Energy Institute. Guidance on quantified human reliability analysis (QHRA). London: Energy Institute; 2012.

Energy Institute. Guidance on crew resource management (CRM) and non-technical skills training. 1st ed. London: Energy Institute; 2014.

Engineering Equipment and Material Users Association (EEMUA). See https://www.eemua.org/tni/About-EEMUA/What-we-do/Standards-guides.aspx.

EEMUA 191. Alarm systems. A guide to design, management and procurement: Engineering equipment and materials users association publication 191. 2nd ed. EEMUA; 2007.

EEMUA 201. Process plant control desks utilising human-computer interfaces: A guide to design, operational and human-computer interface issues: Engineering equipment and materials users association publication no. 201. 2nd ed. EEMUA; 2009.

Fleming M. Effective supervisory leadership behaviour in the offshore oil industry. In: Institute of chemical engineers (IChemE), symposium series 147, paper 29; 2000.

Fletcher G., Flin R., McGeorge P., Glavin R., Maran N., Patey R. Anaesthetists’ non-technical skills (ANTS): Evaluation of a behavioural marker system. British Journal of Anaesthesia. 2003;90(5):580–588.

Flin R. Danger—Men at work. Human Factors and Ergonomics in Manufacturing. 2003;13:261–268.

Flin R., Maran N. Identifying and training non-technical skills for teams in acute medicine. Quality & Safety in Health Care. 2004;13:180–184.

Flin R., Martin L. Behavioural markers for crew resource management: A review of current practice. The International Journal of Aviation Psychology. 2001;11(1):95–118.

Flin R., Mearns K., O’Connor P., Bryden R. Measuring safety climate: Identifying the common features. Safety Science. 2000;34:177–193.

Flin R., O’Connor P., Mearns K. Crew resource management: Improving teamwork in high reliability industries. Team Performance Management: An International Journal. 2002;8(3/4):68–78.

Flin R., Yule S. Leadership for safety: Industrial experience. Quality and Safety in Healthcare. 2004;13(Suppl. II):ii45–ii51.

Fuller C., Vassie L. Benchmarking employee supervisory processes in the chemical industry: Research report 312. Norwich: HSE Books; 2005.

Guldenmund F.W. The nature of safety culture: A review of theory and research. Safety Science. 2000;34(1–3):215–257.

Hansen C.P. A causal model of the relationship among accidents, biodata, personality and cognitive factors. Journal of Applied Psychology. 1989;74(1):81–90.

Health and Safety Commission (HSC). Advisory committee on the safety of nuclear installations. Organising for safety. London: HSE Books; 1993.

Health and Safety Executive. Reducing error and influencing behaviour (HSG48). Suffolk: HSE Books; 1999.

Health and Safety Executive. The role of managerial leadership in determining safety outcomes: Research report 044. Norwich: HM Stationery Office; 2003.

Health and Safety Executive. Research Report RR679: Review of human reliability assessment methods. Norwich, UK: HSE Books; 2009.

Health and Safety Executive. A review of the literature on effective leadership behaviours for safety: Research report 952. Norwich, UK: HSE Books; 2012.

Health and Safety Executive (2013). Leading health and safety at work. Actions for directors, board members, business owners and organisations of all sizes. Industry Guidance 417 (rev 1). Leaflet available at www.hse.gov.uk/pubns/indg417.htm.

Health and Safety Executive (HSE). (2005). Core topic 3. Identifying human failures. Accessed from http://www.hse.gov.uk/humanfactors/topics/core3.pdf.

Health and Safety Executive (HSE). (2005). Core topic 4. Procedures. Briefing note 4. Accessed from http://www.hse.gov.uk/humanfactors/topics/core4.pdf.

Henderson J., Embrey D. Quantifying human reliability in risk assessments. Petroleum Review. 2012;30–34.

Hofmann D., Morgeson F., Gerras S. Climate as a moderator of the relationship between leader-member exchange and content specific citizenship: Safety climate as an exemplar. Journal of Applied Psychology. 2003;88:170–178.

Institute of Nuclear Power Operations. Control room teamwork development training: Course administration and facilitation guide. Atlanta, GA: National Academy for Nuclear Training; 1993.

International Association of Oil and Gas Producers. Shaping safety culture through safety leadership: OGP report no. 452. October 2013.

International Association of Oil and Gas Producers. Crew resource management for well operations teams: Report 501. 2014 April 2014.

International Atomic Energy Authority. The Chernobyl accident: Updating of INSAG-1. A report by the International Nuclear Safety Advisory Group (INSAG). Safety Series No. 75-INSAG-7. 1992.

Keller A.Z., Huwaishel A.M. Top management attitude toward safety in the western European chemical and petrochemical industries. Disaster Prevention and Management. 1993;2:48–57.

Kirwan B. A guide to practical human reliability assessment. London: Taylor & Francis; 1994.

Kirwan B. The validation of three human reliability quantification techniques THERP, HEART and JHEDI: Part 1. Technique descriptions and validation issues. Applied Ergonomics. 1996;27(6):359–373.

Kirwan B., Ainsworth L.K. A guide to task analysis. London, UK: Taylor and Francis; 1992.

Kirwan B., Kennedy R., Taylor-Adams S., Lambert B. The validation of three human reliability quantification techniques, THERP, HEART and JHEDI: Part II—Results of validation exercise. Applied Ergonomics. 1997;28(1):17–25.

Lee T.R. Assessment of safety culture at a nuclear reprocessing plant. Work and Stress. 1998;12:217–237.

Lee T.R., Harrison K. Assessing safety culture in nuclear power stations. Safety Science. 2000;34:61–97.

Mattila M., Hyttinen M., Rantanen E. Effective supervisory behaviour and safety on a building site. International Journal of Industrial Ergonomics. 1994;13:85–93.

Mearns K., Flin R., Gordon R., Fleming M. Measuring safety climate on offshore installations. Work and Stress. 1998;12:238–254.

Mearns K., Hope L. Health and well-being in the offshore environment: The management of personal health. HSE Research Report 305. Norwich, UK; 2005.

Mearns K., Whitaker S.M., Flin R. Safety climate, safety management practice and safety performance in offshore environments. Safety Science. 2003;41:641–680.

Michie S. Causes and management of stress at work. Occupational and Environmental Medicine. 2002;59:67–72.

Mitchell L., Flin R. Non-technical skills of the operating theatre scrub nurse: Literature review. Journal of Advanced Nursing. 2008;63:15–24.

NASA. Columbia accident investigation board report. Vol. 1. United States: NASA; 2003 [chapter 7].

National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling. Deep water: The Gulf oil disaster and the future of offshore drilling. Report to the President; 2011 [chapter 8].

Niskanen T. Safety climate in road administration. Safety Science. 1994;17:237–255.

Noyes J. Designing for humans (Psychology at work series). Hove, East Sussex, UK: Psychology Press/Taylor and Francis; 2001.

O'Connor P., Flin R. Crew Resource Management training for offshore oil production teams. Safety Science. 2003;41:591–609.

O’Dea A., Flin R. Site managers and safety leadership in the offshore oil and gas industry. In: Paper presented at the Academy of Management conference, August 2000 [cited in Flin & Yule, 2004].

O’Dea A., Flin R. Site managers and safety leadership in the offshore oil and gas industry. Safety Science. 2001;37:39–57.

Pidgeon N., O'Leary M. Man-made disasters: Why technology and organizations (sometimes) fail. Safety Science. 2000;34:15–30.

Pousette A., Larsson S., Törner M. Safety climate cross-validation, strength and prediction of safety behaviour. Safety Science. 2008;46:398–404.

Prior R. Top management behaviours—The determining role in changing safety culture. In: Institution of Chemical Engineers (IChemE) symposium series no. 149; 2003:733–744.

Reason J. Human error. Aldershot: Ashgate Books; 1991.

Reason J. Managing the risks of organizational accidents. Aldershot: Ashgate Books; 1997.

Reason J. Achieving a safe culture: Theory and practice. Work and Stress. 1998;12:293–306.

Roger I. Safety leadership in the energy industry: The development and testing of a framework outlining key behaviours of senior managers. Doctoral thesis, School of Psychology, University of Aberdeen; 2013.

Roger I., Flin R., Mearns K. Safety leadership from the top: Identifying the key behaviours. In: Proceedings of the human factors and ergonomics society 55th annual meeting. Las Vegas, USA; 2011.

Rundmo T. Risk perception and safety on offshore petroleum platforms—Part II: Perceived risk, job stress and accidents. Safety Science. 1992;15(1):53–68.

Rundmo T., Hale A. Managers’ attitudes towards safety and accident prevention. Safety Science. 2003;41:557–574.

Seo D.C., Torabi M.R., Blair E.H., Ellis N.T. A cross-validation of safety climate scales using a confirmatory factor analytic approach. Journal of Safety Research. 2004;35(4):427–445.

Shepherd A. Hierarchical task analysis. London and New York: Taylor and Francis; 2001.

Sian I.B., Robertson M., Watson J. Maintenance resource management handbook. Washington DC: Federal Aviation Administration; 2016.

SINTEF Industrial Management, Safety and Reliability. CRIOP®: A scenario method for crisis intervention and operability analysis. Trondheim, Norway; 2004.

STCW. International convention on standards of training, certification and watchkeeping for seafarers (the STCW Convention), and its associated code. In: Diplomatic conference in Manila, the Philippines, 21–25 June 2010; 2011.

Sutherland V.J., Cooper C.L. Man and accidents offshore: The costs of stress among workers on oil and gas rigs. London: Lloyd's List/Dietsmann; 1986.

Sutherland V.J., Cooper C.L. Personality, stress and accidents in the offshore oil and gas industry. Personality and Individual Differences. 1991;12:195–204.

Sutherland V.J., Cooper C.L. Stress in the offshore oil and gas exploration and production industries: An organizational approach to stress control. Stress Medicine. 1996;12:27–34.

Swain A.D., Guttmann H.E. Handbook of human reliability analysis with emphasis on nuclear power plant operations. NUREG/CR-1278. Washington, DC: US Nuclear Regulatory Commission; 1983.

US Chemical Safety Board. Refinery explosion and fire, BP Texas City, Texas. Report no. 2005-04-I-TX; 2007.

Vinodkumar M.N., Bhasi M. Safety climate factors and its relationship with accidents and personal attributes in the chemical industry. Safety Science. 2009;47:659–667.

Visscher G. Some observations about major chemical accidents from recent CBI Investigations. In: Paper presented at the IChemE XX Hazards symposium, symposium series 154; 2008.

Williams J. HEART—A proposed method for achieving high reliability in process operations by means of human factors engineering technology. In: Proceedings of a symposium on the achievement of reliability in operating plant, safety and reliability society, 16th September 1985, Southport; 1985 [cited in HSE RR679, 2009].

Yule S., Flin R., Paterson-Brown S., Maran N. Non-technical skills for surgeons. A review of the literature. Surgery. 2006;139:140–149.

Yule S., Flin R., Maran N., Youngson G., Rowley D., Paterson-Brown S. Surgeons’ non-technical skills in the operating room: Reliability testing of the NOTSS behaviour rating system. World Journal of Surgery. 2008;32:548–556.

Zohar D. Safety climate in industrial organizations: Theoretical and applied implications. Journal of Applied Psychology. 1980;65:96–102.

Zohar D. Modifying supervisory practices to improve sub-unit safety: A leadership-based intervention model. Journal of Applied Psychology. 2002;87:156–163.

Zohar D. Thirty years of safety climate research: Reflections and future directions. Accident Analysis and Prevention. 2010;42:1517–1522.

Zohar D., Luria G. The use of supervisory practices as leverage to improve safety behaviour: A cross-level intervention model. Journal of Safety Research. 2003;34:567–577.
