8
The Impact of Human Stability on Human–Machine Systems: the Case of the Rail Transport

8.1. Introduction

The reliability of automated systems has been an essential constraint for researchers since the emergence of automation. Human–machine systems (HMS) are automated systems with the particularity of having one or more human operators in the loop. One of the key roles of the operator is to increase the resilience of the technological processes and of the human–machine system in general, by relying on their (human) adaptability and capacity for improvisation. Resilience [HOL 06] is the ability of the system to contain and to adapt to the consequences of unforeseen and unpredictable situations, be they simple or catastrophic failures. The concept of resilience differs from the well-known concept of robustness, which is the ability of the system to work under different operating conditions, including failures that can a priori be foreseen. Therefore, resilience has an important impact on the risk management of human–machine systems.

Transport systems are an important type of human–machine system, and the incidents and accidents related to these systems are a highly sensitive issue. Thanks to technological progress, the risk of accidents caused by technical factors is now half that of accidents involving the human factor [COT 07]. It is therefore clear that, in order to improve the resilience of rail transport systems, it is necessary to focus on the human factor from the earliest stages of HMS design.

A possible approach for considering potential errors of the human operator is based on the assumption that an operator in good working conditions makes fewer mistakes than an overloaded operator under stressful conditions. This hypothesis is an interpretation stemming from the study of Millot’s model for regulating human activity [MIL 99].

Thus, the notion of human stability is introduced, by analogy with the stability of a closed-loop automatic system; it refers to the condition of the operator and their ability to avoid making mistakes. The notion of human stability for human–machine systems, discussed in previous works [RIC 10, VAN 17], is deepened here and its relation to resilience is studied. Human stability is defined in the context of managing a rail transport system – train or tramway – and validated through small-scale simulator experiments.

This chapter seeks to extend the notions associated with stability in the sense of automation, namely the notions of equilibrium point and stabilizability, for HMS design. It also aims to study the factors that influence stability, as well as those revealing a change in the status of the system (transition from stability to instability). In particular, it intends to study the influence of a (human or technological) supervisory system and its contribution to improving the resilience of the human–machine system.

This chapter is structured as follows: sections 8.2 to 8.5 introduce the different notions associated with human stability, especially its meaning, its relationship with resilience and the notion of stabilizability, as well as the factors contributing to it. Section 8.6 describes the structure of a stable HMS in the railway context. Section 8.7 illustrates the chapter’s statements with experimental results obtained on the LAMIH COR&GEST small-scale railway platform at the ERGOIA 2011 workshop, and then discusses the relevance of the suggested approach. The chapter concludes with a summary of the obtained results and a presentation of future developments.

8.2. Stability and associated notions

This section and the following ones are dedicated to the study of the main concepts discussed in our research project. We will introduce the notions of resilience and stability in the technological, human and HMS senses, the notion of stabilizability, and the driver’s potential of action and reaction, called PAR.

8.2.1. Resilience

The concept of resilience has been addressed in different scientific communities, especially in psychology [GOU 08] and in automation [ENJ 17, NUM 06, VAN 17, ZIE 07, ZIE 10, ZIE 11]. In psychology, this concept is defined as the ability to resist trauma. In engineering, there are several approaches: within the theory of dependability, resilience is associated with fault and error tolerance, while in elasticity theory it is interpreted as impact resistance.

In the context of this chapter, resilience refers to the ability of a human, or of a system including a human, to recover from or adapt to external aggressions or disturbances. These can be a physical or psychological shock, an error, a fault or a violation. In the case of errors or mistakes, several approaches exist [CHE 07, NAK 07] which address the system’s resistance to failure, the recovery from common failures (or the minimization of failure effects and of their possible propagation) and the reconfiguration of the system so that it continues to deliver the required service.

The notion of resilience also includes resistance to, and recovery from, unforeseen situations, that is, from events whose occurrence was not anticipated during the design of the system for various reasons: a very low probability of occurrence, extraordinary operating conditions, etc. In this case, the resilience of the system is largely dependent on the ability of the HMS’s human operator to improvise and to take the initiative. It is even considered that a complex system cannot minimize the risk of a catastrophic accident without the presence of a human operator, who is the last guarantor of safety during a crisis [AMA 96].

8.2.2. Stability within the technological context

The stability of technological systems refers to the ability of such systems to converge towards a so-called “stable” state in the absence of external excitation. By the absence of external excitation, we refer to the absence of a control signal or a set point, which is known as the “rest state” of the system. Thus, a stable system is a system tending to return to a finite value, known as the rest or equilibrium value (or point), which is often – but not always – equal to zero. A system may have multiple rest values.

The notion of stability has been known and explored since the advent of automation as a science. The work of Lyapunov [LYA 92] laid down the main criteria for gauging system stability and delivered the following formulation of equilibrium stability: if all the evolutionary trajectories of a system starting around a point X stay around this point X, then X is stable in the sense of Lyapunov. Moreover, if all trajectories converge towards X, then X is asymptotically stable.

8.2.3. Mathematical definition of stability in the sense of Lyapunov

Consider an autonomous system ẋ = f(x), where the function f: D → R^n is assumed to be locally Lipschitz on D ⊂ R^n. We assume that the origin x = 0 ∈ D is an equilibrium point of the system, that is, f(0) = 0.

The equilibrium point of the system is:

  • – stable in the sense of Lyapunov if ∀∊ > 0, ∃δ(∊) > 0 such that ||x(0)|| < δ ⇒ ||x(t)|| < ∊, ∀t ≥ 0;
  • – unstable if it is not stable;
  • – asymptotically stable if the point is stable and ||x(0)|| < δ ⇒ ||x(t)|| → 0 when t → ∞;
  • – exponentially stable if the point is stable and ∃ α, β > 0 such that ||x(t)|| ≤ α ||x(0)|| e^(−βt), ∀t ≥ 0.

8.2.3.1. Remarks

  • – An autonomous system is a system whose laws are invariant in time.
  • – A Lipschitz function is a function whose rate of evolution is smaller than a constant value, known as Lipschitz constant value [KHA 96].
  • – Value δ delineates the zone of attraction of the stable equilibrium point.
  • – If the zone of attraction of an equilibrium point is zero, then this equilibrium point is unstable.
  • – Exponential stability characterizes the speed of convergence of the system and we will not focus on it here.

8.2.3.2. Discussion

A system can only be stable if it has stable equilibrium points and if its rate of evolution is limited. In the absence of constraints (control or set point), a stable system manages to absorb the deviations from its stable equilibrium point and may even manage to correct them if stability is asymptotic.

A control law merely translates the stable equilibrium point of the system to a value desired by the operator controlling the system. Therefore, a stable system will be able to contain, or even correct, the deviations suffered during its operation and stay at the desired set value. This compensation capacity is directly related to the zone of attraction of the equilibrium point of the system and, consequently, to its dynamics.
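To illustrate this discussion, the following minimal Python sketch (not part of the original chapter; the gain, set point and disturbance values are arbitrary assumptions) integrates the first-order asymptotically stable system ẋ = −a(x − r). The set point r merely translates the equilibrium point, and a deviation injected during the run is absorbed, as long as it leaves the state within the zone of attraction.

```python
import numpy as np

def simulate(a=1.0, r=2.0, x0=0.0, dt=0.01, t_end=10.0, disturbance=(5.0, 1.5)):
    """Euler integration of x' = -a * (x - r): an asymptotically stable system
    whose equilibrium is translated to the set point r by the control law.
    A state jump of disturbance[1] is injected at time disturbance[0]."""
    steps = int(t_end / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        t = k * dt
        if abs(t - disturbance[0]) < dt / 2:
            x[k] += disturbance[1]              # external deviation to be absorbed
        x[k + 1] = x[k] + dt * (-a * (x[k] - r))
    return x

trajectory = simulate()
print(f"state after the disturbance is absorbed: {trajectory[-1]:.3f} (set point r = 2.0)")
```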

In order to be able to gauge the stability of a system, Lyapunov suggested an algebraic evaluation criterion [LYA 92]. It is introduced below for reference only.

8.2.4. Lyapunov’s theorem

If there is a so-called Lyapunov function V(x): R^n → R such that:

  • – ∃V1, V2: R+ → R+ non-decreasing, so that V1(||x||) ≤ V(x) ≤ V2(||x||);
  • – ∃V3: R+ → R+ non-decreasing, with V3(s) > 0, ∀s > 0, so that the derivative of V along the trajectories satisfies dV(x(t))/dt ≤ −V3(||x(t)||);

then the system is asymptotically stable.

8.2.4.1. Discussion

The interest of this theorem is to be able to conclude on the stability of a system as soon as its evolution is compatible with a specific “template”. The second condition determines whether the studied system possesses a stabilizing dynamic that is invariant in time. A stability criterion is therefore defined by a quantifiable and invariant potential for compensating deviations. Thus, a resilient system is naturally stable.
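To make the “template” idea concrete, the short check below (an illustrative sketch, not taken from the chapter) evaluates the candidate Lyapunov function V(x) = x² for the scalar system ẋ = −x³ on a grid: V is positive away from the origin and its derivative along the trajectories, dV/dt = 2x·(−x³), is negative, which is the signature of an asymptotically stable equilibrium.

```python
import numpy as np

def f(x):
    return -x**3            # autonomous scalar system x' = -x^3

def V(x):
    return x**2             # candidate Lyapunov function

def V_dot(x):
    return 2 * x * f(x)     # derivative of V along trajectories: dV/dx * f(x)

# Check the Lyapunov conditions numerically on a grid around the origin.
grid = np.linspace(-2.0, 2.0, 401)
nonzero = grid[np.abs(grid) > 1e-9]

positive_definite = np.all(V(nonzero) > 0) and V(0.0) == 0
derivative_negative = np.all(V_dot(nonzero) < 0)

print("V positive definite:", positive_definite)
print("dV/dt negative along trajectories:", derivative_negative)
# Both conditions hold, so x = 0 is an asymptotically stable equilibrium.
```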

8.3. Stability in the human context

Here, we will offer a definition of stability within a purely human context.

8.3.1. Definition of human stability

Human stability is the ability of the human to maintain certain mental and physiological states in a situation without any order or task to be performed, as long as the external disturbances to which they are subjected do not exceed a given threshold.

These states are called mental and physiological states of equilibrium, respectively. Each state is associated with an area of attraction.

8.3.1.1. Hypothesis

It is generally assumed that a healthy human being has at least one state of equilibrium for each aspect: mental and physiological.

Compared to Lyapunov’s definition of stability, human stability here concerns the capacity of the human to fulfill a certain assignment, which depends on their capacity to contain the effects of external disturbances. The assignment only translates the equilibrium point, without calling stability into question, which remains an intrinsic notion.

8.3.1.2. Remarks

There are other definitions of this type of stability [RIC 10]. However, it is wise to put forward a definition that differentiates between, on the one hand, the guideline or order given to the human and their capacity to accomplish it and, on the other hand, the human’s potential to resist or to compensate for an external disturbance.

The question now is how to determine the stability or instability of human equilibrium points. Moreover, it is known that the characteristics of humans are not invariant over time, particularly during periods of intense solicitation [CAB 92], whether over the duration of an assignment or in the medium term.

It is therefore utopian to speak of an overall human stability. In fact, the only possibility of verifying that equilibrium points are stable would be to perform regular medical and psychological examinations. The validity of these examinations could only be temporary, as their duration of validity depends heavily on the workload, as well as on the stressful situations experienced. Sometimes, certain traumas will permanently alter the psychological and physiological equilibrium of a person. Therefore, it might be necessary to limit the time window considered for assessing the invariance of the characteristics of the human under scrutiny to the framework of a single assignment. Otherwise, a stable equilibrium state might turn into an unstable state if the duration of the assignment exceeds a reasonable length of time [CAB 92].

Therefore, a more realistic approach to the problem is to suggest a notion of momentary stability. Each person is then associated with a potential of action and reaction (PAR) depending on their intrinsic qualities (strength, resistance, etc.), their training and mastery of the assignment performed and, finally, their momentary emotional state. The PAR can be interpreted as a quantification of resilience, but it is not limited to adjustment defects as such. It tends to decrease depending on the workload carried out, the assignment’s environment and its duration. It decreases very quickly during the management of extraordinary situations (crises, unforeseen events, increased responsibility, etc.).

8.3.2. Definition of the potential of action and reaction

The potential of action and reaction (PAR) is a quantitative parameter that describes a person’s ability to respond to anticipated or unexpected external demands.

It is a contextualized parameter related to a specific assignment. The PAR has a direct influence on the zone of attraction of a person’s equilibrium points. The higher the PAR, the better the person will be able to handle the tasks to be performed, as well as any expected or unexpected situations, without the risk of making errors. On the contrary, if the PAR decreases below a certain minimum, the zone of attraction of the equilibrium points will be drastically reduced, which will de facto turn stable equilibrium points into unstable ones. In that case, the human might lose their capacity for resilience.
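Purely as an illustration of how the PAR might be tracked during an assignment, the sketch below proposes a hypothetical update rule (the chapter does not define one): the potential erodes with the workload and the elapsed time, drops sharply when an unforeseen event has to be handled, and a threshold marks the point where the zone of attraction of the equilibrium points is considered lost. All numerical values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class OperatorPAR:
    """Hypothetical, illustrative model of a potential of action and reaction."""
    value: float = 1.0              # normalized PAR, 1.0 = rested and trained operator
    instability_threshold: float = 0.2

    def update(self, workload: float, dt_minutes: float, unforeseen_event: bool = False):
        # Nominal erosion proportional to workload and elapsed time (assumed rates).
        self.value -= 0.01 * workload * dt_minutes
        if unforeseen_event:
            self.value -= 0.15       # sharp drop when handling an extraordinary situation
        self.value = max(self.value, 0.0)

    @property
    def stable(self) -> bool:
        """True while the equilibrium points keep a usable zone of attraction."""
        return self.value > self.instability_threshold

par = OperatorPAR()
for minute in range(15):
    par.update(workload=0.8, dt_minutes=1.0, unforeseen_event=(minute == 6))
    print(f"t={minute:2d} min  PAR={par.value:.2f}  stable={par.stable}")
```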

8.4. Stabilizability

Not all existing systems are inherently stable. Nevertheless, some of them can be used under a particular operating constraint: the use of a so-called stabilizing control. This stabilizing control law makes it possible to maintain the evolutionary trajectories of the system parameters around unstable equilibrium points, which have no inherent zone of attraction. The new system, consisting of the original system and the controller implementing the stabilizing control, is thus effectively stable. Such a stabilizing control can only exist in the case of a closed-loop system, within a regulation structure (Figure 8.1).

image

Figure 8.1. Regulation loop: closed-loop control system

Clearly, the controller must be sufficiently reliable to ensure that an unstable system does not diverge during the execution of the assignment. In addition, not all systems are necessarily stabilizable. In order to keep a system around its unstable equilibrium points, the system must be controllable, that is, it must be possible to reach any state of the system from any other initial state, using an appropriate control law and within a specific time frame [BRO 83]. This controllability is a structural property of the system, defined by the physical limitations of the system in relation to the controller.

In general, any system, be it technological or human, has limitations; hence, local controllability around the equilibrium points is used, adapted to the goals of an assignment.
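For a linear approximation around an equilibrium point, this local controllability can be checked with the classical Kalman rank test. The sketch below is standard control theory given as a generic illustration (the double-integrator model is an assumption, not the chapter’s system): it builds the controllability matrix [B, AB, ..., A^(n−1)B] and checks its rank with NumPy.

```python
import numpy as np

def controllability_matrix(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Kalman controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A: np.ndarray, B: np.ndarray) -> bool:
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Illustrative double-integrator model (position, speed) driven by a single input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
print("controllable:", is_controllable(A, B))   # True: rank of [B, AB] is 2
```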

Going back to the notions of PAR and human stability, stabilizability is interpreted as the orders or information that the operator receives with a view to increasing their PAR. This can be an alarm or the announcement of a state of emergency, which mobilizes the human operators and helps them to anticipate an extraordinary situation.

8.5. Stability within the context of HMS

The stability of HMS significantly differs from human stability in that it is mainly related to the execution of a predefined task within the context of the HMS, in the presence of a technological system with its own stability properties and in the presence of ancillary technological elements influencing the stability of the human operator.

While the technological system is generally stable or stabilizable, unforeseen situations may occur in which the embedded controller turns out to be insufficient. In this context, the HMS operator contributes to the stabilization of the technological system in unforeseen situations, be they dangerous or not, in addition to the inherent technological stabilizing control. The system’s resilience is thus greatly improved [AMA 96].

image

Figure 8.2. Constituents and characteristic parameters of the HMS in the context of an assignment

The actors and factors influencing the HMS within the context of a specific assignment are shown in Figure 8.2. A first contribution to the stability of the system lies in the design of the technological system and the communication interface between the operator and the machine, adapting its tools to human behavior [MIL 99]. The ergonomic design of a human–machine interface (HMI) can help to limit PAR decrease during the length of the assignment by reducing physical, sensory and mental fatigue.

The “stabilization” of the human operator can be achieved by adding a supervision module, which acquires HMS information using on-board sensors, regarding the actions and status of the operator, the state and parameters of the technological system and the environmental context. If a dangerous situation is detected, the supervision module triggers an alert. A first discussion on this type of “supervisor” was engaged in [BER 11].

Another stabilizing factor of the HMS would be to include a hierarchical instance playing the role of a regulator within the regulation loop (Figure 8.3). This body would be made up of one or more human operators who would supervise the HMS whenever the need for such supervision emerges. The supervision module would emit an alert signaling the appearance of such a situation.

image

Figure 8.3. Closed-loop human–machine system

8.6. Structure of the HMS in the railway context

8.6.1. General structure

The studied HMS includes the human operator, the HMI, the rail transport system, the supervisor and all sensors and auxiliary computers. The system has three “regulatory closed loops”: a closed loop including the technological system and its regulator, a closed loop including the human operator as controller of the technological system and, finally, a closed loop including the HMS and the higher supervisory authority, such as the control center or the so-called PCC (“Poste de Commande Centralisé” in French). A fourth closed loop of the system is an HMS-internal supervisory loop including the human operator and the supervision module. The expected structure of the system is shown in Figure 8.4.

image

Figure 8.4. The HMS structure. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

8.6.2. The supervision module

The supervision module is introduced in Figure 8.5. This module is an essential HMS stabilizing factor due to its role as alert provider. Its operating principle is to compare the sequences of measured data with the sequences obtained by simulation. The BCD block is built following the principle of the benefit–cost–deficit model [VAN 11]. It is used for estimating the amount of uncertainty in the system and for propagating this estimate to the different functional blocks. In order to simulate the sequences used for comparison, it is necessary to model the HMS.

image

Figure 8.5. Supervision module
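A minimal sketch of this comparison principle is given below (illustrative only; the chapter does not detail the algorithm). The measured speed sequence is compared with the sequence produced by the HMS model, and an alert is raised when the residual exceeds a tolerance for too many consecutive samples; the tolerance and persistence values are assumptions.

```python
import numpy as np

def supervision_alert(measured: np.ndarray,
                      simulated: np.ndarray,
                      threshold: float = 5.0,     # km/h, assumed tolerance
                      persistence: int = 10) -> bool:
    """Raise an alert when the measured/simulated discrepancy exceeds the
    tolerance for `persistence` consecutive samples (illustrative logic)."""
    residual = np.abs(measured - simulated)
    above = residual > threshold
    run = 0
    for flag in above:
        run = run + 1 if flag else 0
        if run >= persistence:
            return True
    return False

# Example: the simulated profile decelerates, the measured one does not.
t = np.arange(0, 30)
simulated = np.clip(60 - 2.0 * t, 0, None)     # expected braking profile
measured = np.full_like(simulated, 60.0)       # brake failure: speed stays at 60 km/h
print("alert:", supervision_alert(measured, simulated))
```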

8.6.3. The technological system model

The technological system is a rail transport system. It is modeled by a hybrid automaton, formed by a finite state machine (FSM) describing the transitions between the different modes of operation and their continuous representations. An operating regime of this system is modeled in continuous time by a state space dynamical model:

ẋ(t) = f(x(t), u(t)),  y(t) = g(x(t), u(t))

where x ∈ R^n represents the state of the system, u ∈ R^m represents the control law, y ∈ R^l represents the measurable variables and functions f and g are, respectively, the state and output functions:

f: R^n × R^m → R^n,  g: R^n × R^m → R^l

In the context of this study, the variables of interest are the movement speed of the train and the internal operating parameters of the vehicle, which make it possible to determine the corresponding operating speed. In this chapter, we will only consider speed, since the operating regime does not change in the scenario under study.
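As an illustration of such a continuous operating-regime model, the sketch below integrates a simplified longitudinal train model (the mass and resistance coefficients are assumptions, not the chapter’s exact equations): the state x is the speed, the control u is the traction or braking force, and the output y is the measured speed.

```python
import numpy as np

def simulate_speed(force_profile, mass=50_000.0, c0=800.0, c1=30.0, dt=0.1):
    """Simplified longitudinal dynamics: m * dv/dt = F_traction - (c0 + c1 * v).
    State x = speed v (m/s), control u = traction/braking force (N),
    output y = measured speed. Coefficients are illustrative assumptions."""
    v = 0.0
    speeds = []
    for F in force_profile:
        dv = (F - (c0 + c1 * v)) / mass
        v = max(v + dt * dv, 0.0)     # speed cannot become negative
        speeds.append(v)
    return np.array(speeds)

# Accelerate for 60 s, coast for 20 s, brake for 20 s (1,000 samples of 0.1 s).
u = np.concatenate([np.full(600, 30_000.0), np.zeros(200), np.full(200, -40_000.0)])
y = simulate_speed(u)
print(f"max speed: {y.max():.1f} m/s, final speed: {y[-1]:.1f} m/s")
```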

8.6.4. The human operator model

The human operator model was introduced in [BER 11] and is shown in Figure 8.6.

image

Figure 8.6. Human operator model

It is made up of two non-deterministic finite-state automata (for instance, hidden Markov chains) [RAC 12]. The emotional model is a first FSM describing the considered set of emotional states, whereas the behavioral model is a second FSM describing the various possible actions of the operator. Figure 8.7 and Table 8.1 show a modeling example of this type.

image

Figure 8.7. Modeling example

Table 8.1. Probability of transitions between states

Probability of transition to the next state (sleepy/balanced/nervous)
Current state | 1 Deceleration | 2 Speed/lane monitoring | 3 Other states
1 Deceleration | 0.9/0.25/0.4 | 0.05/0.7/0.3 | 0.05/0.05/0.3
2 Speed/lane monitoring | 0.05/0.7/0.4 | 0.9/0.25/0.3 | 0.05/0.05/0.3
3 Other states | 0.01/0.45/0.3 | 0.01/0.45/0.3 | 0.98/0.1/0.4
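The transition probabilities of Table 8.1 can be used directly to simulate the behavioral automaton. The sketch below (illustrative code, not from the chapter) transcribes the three matrices of the table, one per emotional state, and samples a behavioral state sequence for a chosen emotional state.

```python
import numpy as np

STATES = ["deceleration", "speed/lane monitoring", "other states"]

# Transition matrices transcribed from Table 8.1 (rows = current state,
# columns = next state), one matrix per emotional state.
TRANSITIONS = {
    "sleepy":   np.array([[0.90, 0.05, 0.05],
                          [0.05, 0.90, 0.05],
                          [0.01, 0.01, 0.98]]),
    "balanced": np.array([[0.25, 0.70, 0.05],
                          [0.70, 0.25, 0.05],
                          [0.45, 0.45, 0.10]]),
    "nervous":  np.array([[0.40, 0.30, 0.30],
                          [0.40, 0.30, 0.30],
                          [0.30, 0.30, 0.40]]),
}

def simulate_behavior(emotion: str, steps: int = 10, start: int = 0, seed: int = 0):
    """Sample a behavioral state sequence for a fixed emotional state."""
    rng = np.random.default_rng(seed)
    matrix = TRANSITIONS[emotion]
    state = start
    sequence = [STATES[state]]
    for _ in range(steps):
        state = rng.choice(len(STATES), p=matrix[state])
        sequence.append(STATES[state])
    return sequence

print(simulate_behavior("balanced"))
print(simulate_behavior("sleepy"))
```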

8.7. Illustrative example

8.7.1. Experimental protocol

The experiment was carried out on the COR&GEST small-scale railway simulation platform (Figure 8.8(a)). The platform is made up of a reduced railway model, with a cabin for the rail driver and a supervisory position. The driving interface (HMI) is shown in Figure 8.8(b).

image

Figure 8.8. The COR&GEST platform

For the purposes of the tests, the driving position was equipped with a Tobii eye-tracking sensor and the FaceReader facial expression recognition system from Noldus. The eye-tracking system made it possible to follow the direction of the driver’s gaze on the projection of the interface in real time (red dots in Figure 8.9(a)).

The facial expression recognition system made it possible to estimate the similarity between the driver’s facial expression and the six classical facial expressions. The driver’s state could thus be qualified as: neutral, happy, angry, frightened, disgusted, sad or surprised (Figure 8.9(b)). Each hypothesis was associated with a degree of likelihood.

The experiment was conducted on a group of non-expert subjects who drove the vehicle on the platform in a scenario with unforeseen faults: a door locking fault and a brake failure. The constraints of the scenario were to respect the speed signaling (speed limitation by sector) and the mandatory stops at stations. To avoid a decrease in PAR that might distort the results, and considering the subjects’ limited knowledge of rail driving, the scenario lasted 15 minutes.

image

Figure 8.9. Example of sensor’s outputs. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

image

Figure 8.10. Evolution of the train speed compared to the instructions given. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

image

Figure 8.11. Occurrence of faults

image

Figure 8.12. Driver’s estimated emotions

image

Figure 8.13(a). Horizontal movement of the gaze

image

Figure 8.13(b). Horizontal movement of the gaze

8.7.2. Experimental results

Figure 8.10 shows the evolution of train speed during the driving scenario for subject nos. 3 and 4 and the imposed speed limitations. Let us observe that speed limitations appear as signals (traffic signs) on the driving interface. Subject no. 3 did not experience any failures, while subject no. 4 was confronted with two door-related faults and a brief brake anomaly (Figure 8.11).

Subject no. 3 is presented here as a reference, while the study focuses on the follow-up of subject no. 4. Figure 8.12 shows the output of the facial recognition system; a 7th state is added to the six classical states, corresponding to the failure to recognize the facial expression.

Figure 8.13 shows the horizontal evolution of the gaze of subject no. 4 over four short periods of time during the scenario: Figure 8.13 (i) corresponds to the beginning of the scenario and Figure 8.13 (iv) corresponds to the end of the scenario. On the vertical axis, the direction of gaze is given by an X coordinate expressed in pixels on the screen. A null value corresponds to a measurement failure, caused by an obstruction, a sudden movement of the head or simply a very fast movement of the eyes.
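Since the null samples only flag measurement failures, they have to be excluded before any statistic is computed on the gaze signal. A minimal sketch of this preprocessing step is given below (an assumed approach, not described in the chapter).

```python
import numpy as np

def gaze_dispersion(x_pixels: np.ndarray) -> float:
    """Standard deviation of the horizontal gaze coordinate, ignoring
    dropout samples (encoded as 0 by the eye tracker)."""
    valid = x_pixels[x_pixels > 0]
    if valid.size == 0:
        return float("nan")
    return float(np.std(valid))

# Example: a short window with two dropouts (0 values).
window = np.array([512, 520, 0, 518, 640, 0, 630, 515], dtype=float)
print(f"dispersion over valid samples: {gaze_dispersion(window):.1f} px")
```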

8.7.3. Remarks and discussion

When comparing the driving performances between subjects 3 and 4 (Figure 8.10), their superficial knowledge of the railway driving system should be taken into account. Due to this relative and contextual cognitive deficiency, their PARs significantly dropped during the scenario.

Subject no. 3 managed to follow the speed instruction fairly satisfactorily over most of the course. The only unsatisfactory phases were at times 15:34 and 15:36, at the beginning of the course. Their performances remained relatively constant throughout the scenario.

Subject no. 4 showed a different behavior. The subject had a tendency to overreact during acceleration (17:04, 17:05 and 17:08) and braking (17:06 and 17:09). It is also clear that this trend was exacerbated halfway through the scenario.

There was clearly a change in the behavior of subject no. 4 after experiencing failure situations. Drawing a parallel with the fault occurrences (Figure 8.11, at times 17:06, 17:09 and 17:11), it appears that the door defects and the associated alarm caused an amplification of the natural tendency of the subject to overreact. The most serious failure (brake failure) had a smaller effect on driving. Perhaps the cognitive deficiency of the subject did not allow them to understand the consequences of this fault. On the other hand, the long duration of the faults and the alarm disturbed them.

At the first occurrence of a failure, at 17:06, the subject was surprised (Figure 8.12). The results show a higher frequency of the subject expressing fear, disgust and anger, all three indicative of a negative state of mind of the driver, who was assessing their own performance at that moment. On the other hand, it is clear that the measurement of the facial expression is not very reliable. States change too fast to expect a real-time assessment of the driver’s emotional state. A certain logic appears if the measurements are considered over a larger time window, which allows us, if necessary, to confirm the “negative” emotional state of the subject.

Eye-tracking measurements, on the other hand, offered better results (Figure 8.13). The subject started the scenario with a calm state of mind, their eye movements being quite contained and slow, as shown in Figure 8.13(a) and (b). Following the fault that caught their visual attention (Figure 8.13 (ii), 17:06), the gaze of the subject began to focus periodically on the speed indicator, and sometimes on the alert indicator. Then, from 17:08, we can see a disruption, due to the overrunning and stopping of the engine, which lasted until the subject was able to calm down.

Another disruption occurred when the second fault appeared, which attracted the driver’s attention (Figure 8.13 (iii)) and altered their state of mind, since a difference in the speed of the gaze evolutions between Figures 8.13 (iii) and (iv) can be identified.

The results of the experiments indicate that the quantitative measurements on the driver are more accurate than qualitative measures. However, the analysis of the driver’s behavior can help us detect the occurrence of an event that is not directly observable. From this perspective, the use of the model based on FSMs to detect certain types of behavior is justified, since it gives us the most reliable indication.

In addition to the factors that intuitively influence the system’s stability, such as the duration of the assignment or the occurrence of unforeseen events, less intuitive factors emerged, including the drivers’ self-assessment concerning their own performance, which is a psychological criterion. The resulting sense of disappointment contributed to significantly reducing the driver’s PAR. The comparison between the behaviors of subject nos. 3 and 4 shows that this factor is not negligible even during brief assignments.

8.8. Conclusion

In this chapter, definitions of stability and stabilizability were proposed, as well as suggestions for the design of stable and, therefore, resilient human–machine systems. This design is based on a closed-loop structure. A parameter for assessing such stability was presented: the driver’s potential of action and reaction (PAR), which must be gauged during the assignment using a specific supervision module. Stability indicators were also studied; the eye-tracking sensor was found to be superior to the facial expression recognition module. Finally, psychological factors, such as negative self-assessment, may lead to the instability of the human–machine system.

In the short term, an experimental database of a group of professional drivers during 2-hour experiments will be studied. The results of this study will validate the concept of a supervision module, which, so far, has not been tested under realistic conditions.

8.9. References

[AMA 96] AMALBERTI R., La conduite des systèmes à risque, PUF, Paris, 1996.

[BER 11] BERDJAG D., CAULIER P., VANDERHAEGEN F., “New challenges for the multi-criteria and multi-objective diagnosis of human-machine systems”, IFAC Workshop on Human-Machine Systems, Berlin, Germany, October 2011.

[BRO 83] BROCKETT R.W., “Asymptotic stability and feedback stabilization”, in MILLMAN R.S. et al. (eds), Differential Geometric Control Theory, pp. 181–191, Birkhäuser, Basel, 1983.

[CAB 92] CABON P., Maintien de la vigilance et gestion du sommeil dans les systèmes automatisés : recherche de laboratoire applications aux transports ferroviaires et aériens, PhD thesis, University of Paris V, 1992.

[CHE 07] CHEN C.M., LIN C.W., CHEN Y.C., “Adaptive error-resilience transcoding using prioritized intra-refresh for video multicast over wireless networks”, Signal Processing: Image and Communication, vol. 22, pp. 277–297, 2007.

[COT 07] COTHEN G.C., Role of human factors in rail accidents, Report, Federal Railroad Administration, available at: https://www.transportation.gov/content/role-human-factors-rail-accidents, March 2007.

[ENJ 17] ENJALBERT S., VANDERHAEGEN F., “A hybrid reinforced learning system to estimate resilience indicators”, Engineering Applications of Artificial Intelligence, vol. 64, pp. 295–301, 2017.

[GOU 08] GOUSSÉ V., “Apport de la génétique dans les études sur la résilience : l’exemple de l’autisme”, Annales Médico-psychologiques, revue psychiatrique, vol. 166, no. 7, pp. 523–527, 2008.

[HOL 06] HOLLNAGEL E., “Resilience – the challenge of the unstable”, in WOODS D.D., HOLLNAGEL E. (eds), Resilience Engineering – Concepts and Precepts, CRC Press, Boca Raton, pp. 9–17, 2006.

[KHA 96] KHALIL H., Nonlinear Systems, Prentice Hall, Upper Saddle River, 1996.

[LYA 92] LYAPUNOV A.M., The general problem about the stability of motion, PhD thesis, University of Kharkov, Ukraine, 1892.

[MIL 99] MILLOT P., “Systèmes Homme-Machine et Automatique”, Journées Doctorales de l’Automatique JDA’99, Nancy, France, 1999.

[NAK 07] NAKAYAMA H., ANSARI N., JAMALIPOUR A. et al., “Fault-resilient sensing in wireless sensor networks”, Computer Communication, vol. 30, pp. 2375–2384, 2007.

[NUM 06] NUMANOGLU T., TAVLI B., HEINZELMAN W., “Energy efficiency and error resilience in coordinated and non-coordinated medium access control protocols”, Computer Communications, vol. 29, pp. 3493–3506, 2006.

[RAC 12] RACHEDI N.D., BERDJAG D., VANDERHAEGEN F., “Détection de l’état d’un opérateur humain dans le contexte de la conduite ferroviaire”, Lambda Mu 18, Tours, France, 2012.

[RIC 10] RICHARD P., BENARD V., VANDERHAEGEN F. et al., “Vers le concept de stabilité humaine pour l’amélioration de la sécurité des transports”, 17e congrès de maîtrise des risques et de sûreté de fonctionnement, La Rochelle, France, 2010.

[VAN 11] VANDERHAEGEN F., ZIEBA S., ENJALBERT S. et al., “A benefit/cost/deficit (BCD) model for learning from human errors”, Reliability Engineering and System Safety, vol. 96, no. 7, pp. 757–766, 2011.

[VAN 17] VANDERHAEGEN F., “Towards increased systems resilience: new challenges based on dissonance control for human reliability in Cyber-Physical & Human Systems”, Annual Reviews in Control, vol. 44, pp. 316–322, 2017.

[ZIE 07] ZIEBA S., JOUGLET D., POLET P. et al., “Resilience and affordances: perspectives for human-robot cooperation?”, 26th European Annual Conference on Human Decision Making and Manual Control, Copenhagen, Denmark, 21–22 June 2007.

[ZIE 10] ZIEBA S., POLET P., VANDERHAEGEN F. et al., “Principles of adjustable autonomy: a framework for resilient human machine cooperation”, Cognition, Technology and Work, vol. 12, no. 3, pp. 193–203, 2010.

[ZIE 11] ZIEBA S., POLET P., VANDERHAEGEN F., “Using adjustable autonomy and human-machine cooperation for the resilience of a human-machine system: application to a ground robotic system”, Information Sciences, vol. 181, pp. 379–397, 2011.

Chapter written by Denis BERDJAG and Frédéric VANDERHAEGEN.
