4
The Design of an Interface According to Principles of Transparency

4.1. Introduction

From assembly operations in industry to the transport domain, automation is profoundly changing the paradigm of task performance. Technical agents are increasingly involved in completing tasks, and their role can vary from simply accompanying the human operator in specific tasks to carrying out other tasks totally and autonomously. In recent years, driving an automobile has also been significantly modified by this automation, which can be partial or total [NAT 14]. Several prototypes of automated cars have already been tested in the United States and Japan, as well as in Europe [MEY 14, TRI 14]. Promises about autonomous driving abound: improved road safety, reduced traffic congestion, more comfort for the human agent and improved mobility in a context of demographic change [MEY 14].

While the development of autonomous cars has been accelerated by recent technological advances, the "human factors" aspect must nevertheless be taken into account. Indeed, in her article "Ironies of automation", Bainbridge [BAI 83] underlines what can be perceived as a sword of Damocles: a high degree of automation is seen as desirable because it would mitigate human failures, but, at the same time, human agents are asked to be able to take back control in order to manage difficult or unforeseen situations for which the automation has not been designed. The "irony" lies in the fact that the more a system is automated, the more critical the contribution of the human operator becomes! Other problems, pointed out by authors such as Parasuraman, Sheridan and Wickens [PAR 00], can also occur. In manual mode, the human agent is in charge of the entire driving task: he/she perceives the driving environment and controls the actuators of the vehicle. With automation, the driving task is no longer carried out by the human agent alone: the technical agent (or controller) participates partially or totally in the completion of the task, depending on its degree of automation. In cars that are totally autonomous, or autonomous under particular conditions, the human agent is then able to perform other tasks that are not related to driving (Non-Driving Related Tasks or NDRT).

Abandoning the driving task, as allowed by automation, may have no major consequences in certain cases, but may lead to fatal accidents in others. This was the case, for example, on May 7, 2016, when an accident involving a Tesla and a heavy goods vehicle caused the death of the driver [SOL 16]. On board the Tesla Model S, the Autopilot system, which provides semi-autonomous functions, was in fact activated. The accident occurred at 119 km/h on a motorway in Florida. As specified by Bryan Thomas, spokesperson for the National Highway Traffic Safety Administration (NHTSA), "the enquiry has found no faults with the software" [NAT 14]. The human agent, sitting in the driver's seat, is believed to have been distracted for unknown reasons.

The "human factors" aspect therefore plays an important role in the design of autonomous cars, not only to guarantee the safety of the human agent, but also to guarantee public acceptance of these cars. This is why this aspect is taken into account by most automobile manufacturers around the world, such as Volvo, Mercedes and BMW. In France, the Renault group is not to be outdone.
The group is interested in particular in eyes off/hands off systems, in which the human agent no longer necessarily needs to look at the road or keep their hands on the steering wheel. Supervision by the human agent is therefore no longer required. In order to design interfaces that guarantee the level of safety desired by the group for eyes off/hands off systems, Renault runs projects both internally and externally. This is the case, for example, for the localization–augmented reality (LAR) project, of which the research presented in this chapter is a part. This project involves designing interfaces that make it easier to understand the operation of the system and that encourage the (re)construction of the human operator's situational awareness when taking back manual control in an SAE level 4 car. To do this, we have proposed principles of transparency as a means of directing the design, on the basis of Lyons' models [LYO 13]. In order to evaluate the relevance of these principles, we have carried out experiments in a static driving simulator.

In this chapter, we first set out definitions of some notions, focusing in particular on situational awareness and transparency, which were introduced above. We then present the approach that we have used not only to define the principles of transparency but also to specify the content of the interfaces.

4.2. State of the art

In this section, we clarify the main terms used in our research: situational awareness, which concerns the understanding that the human agent must have of the driving environment, and transparency, which concerns the understanding that the human agent must have of the controller's operation.

4.2.1. Situational awareness

The situational awareness (SA) of an individual, as defined by Endsley [END 95], is made up of three levels:

  • level 1: perception of the elements in the environment. At this level, the information is not interpreted; it is simply received in a raw, unprocessed form. This level contains information about the states, attributes and dynamics of the elements that are relevant to the environment;
  • level 2: understanding of the current situation. This level comes after perception, as soon as the data can be integrated into existing frameworks. The information processing involved in understanding consists of matching the characteristics of the ongoing situation with frameworks in memory that represent prototypical situations. This stage of understanding is important for grasping the significance of the elements perceived at the previous level, by means of an organized image of the situation in relation to the objectives;
  • level 3: projection of future states. This level is associated with the ability to anticipate, that is, to project the states of the elements perceived and understood into the near future. The accuracy of the prediction is strongly correlated with the accuracy of the two preceding levels. This is the level at which the future state of the perceived situation is forecast.

In a general manner, research literature mentions two main states of SA:

  • – it can be insufficient: in this case, human agents do not manage to construct an exact and complete representation of their environment, which is necessary to achieve the objectives assigned to them. This can therefore have consequences for the performance obtained;
  • – it can be sufficient: SA is described as sufficient when a human agent perceives and correctly interprets the information in the environment in which they find themselves. They can thus react appropriately to any unforeseen event, in order to limit or prevent the damage that this event may cause. This is the optimal state.

Ensuring sufficient SA for the human agent is one of the major challenges that "design for SA" (termed design for situational awareness by Endsley [END 16]) seeks to address. It is also a challenge that we wish to take on in our work. In addition to SA, another selected objective is the transparency of the controller.

4.2.2. Transparency

The Larousse dictionary proposes five definitions of the word “transparent”:

  • definition 1: describes a body that transmits light by refraction and through which the objects can be seen with clarity;
  • definition 2: describes a material that allows light to pass through it;
  • definition 3: describes a cloth, paper or skin that is thin enough so that we can see through it;
  • definition 4: whose reasons or motives are easy to guess;
  • definition 5: whose functioning is clear and which is not concealed from public view.

These definitions allow us to identify two broad approaches. In approach 1, definitions 1, 2 and 3 indicate that an object or system is transparent if it is possible to see through it. In approach 2, corresponding to definitions 4 and 5, an object or system is transparent if its operation is easily understood. Even though these two approaches appear similar, they are functionally very different [OSO 14]. While approach 1 concerns form, approach 2 concerns substance, and it is the latter that we have favored in our research.

Following this approach, transparent systems communicate information that allows how they function and/or what they do to be understood. In research literature about the interaction between a human agent and an automated system, several authors define the transparency of a system according to this characteristic. For example, Cring and Lenfestey [CRI 09] have suggested that transparency refers to the perceived predictability and the understanding that the human agent has of a specific behavior of automation. Kim and Hinds [KIM 06], for their part, specify that if the human agents do not understand the system’s operational logic, normal and even routine behaviors of the latter can then be perceived as errors by human agents. However, even though a high level of transparency (revealing several pieces of information about operation of the controller) is in general preferred1 by human agents [SAN 14], too great a quantity of information can lead to an information overload and thus affect levels 1 and 2 of the SA.

Some authors have worked on the transparency of the controller in the context of automobile driving. For example, to ensure a successful takeover, Eriksson and Stanton [ERI 15] have proposed that Grice's four maxims [GRI 75] be used to increase the transparency of autonomous cars. These maxims are the following:

  • – the quantity maxim which stipulates that no unnecessary information must be added to the interface;
  • – the quality maxim which stipulates that all the information sent by the automated system must be true;
  • – the relational maxim which stipulates that all the information must be contextualized and pertinent with respect to the task that the controller is in the process of carrying out;
  • – the manner maxim which stipulates that ambiguity must be avoided and that the information must be presented in a structured and brief manner.

Eriksson and Stanton [ERI 15] thus suggest that applying these maxims to the interface design of a highly automated car will produce in the human agent a good mental representation of the controller's operation, and will thus facilitate the transition phase between the automatic mode and the manual mode. Naujoks, Forster, Wiedmann and Neukum [NAU 16] examined the transparency of highly automated cars with respect to the communication of future maneuvers. Their results showed that the vocal modality made it easier to extract the information relevant for understanding the controller's intentions.

Although little research of this kind has been carried out, some authors have proposed models of transparency that can be applied to automobile driving. Among these models are, for example, the Situation Awareness-based Agent Transparency model and Lyons' models, which are the ones that we have used (see Figure 4.1). Lyons looked at the question of transparency from two angles: the transparency of a robot for humans and the transparency of humans for robots, where the robot refers to an automated system. The transparency of the robot refers to the set of information that the robot must communicate to the human [LYO 13].

image

Figure 4.1. Lyons’ models according to Debernard et al. [DEB 16]

This transparency is based on various models: the intention model, the task model, the analytical model and the environmental model.

  • The intention model: here the intention refers to the general objective for which the robot was designed, in other words to its overall objective. At this stage, two questions arise: what is the robot's "raison d'être"? Macroscopically, what functions is it able to carry out? To distinguish this "intention in a general sense", defined during the design phase, from an "intention in action", and to avoid any ambiguity with an intended maneuver, it appeared more judicious to rename this model the "general objective model" (of the robot), and we will refer to it as such in the following.
  • The task model: Lyons specifies that the human agent analyzes the robot's actions according to a specific cognitive schema2. The task model therefore aims to provide details that help the human agent establish this cognitive schema. In particular, it must contain information that makes a given task easier to understand: information about the robot's objective at a given instant in time, its progress with respect to this objective, its progress relative to the tasks that it can carry out, and knowledge of the errors that could occur while the task is being carried out. These various pieces of information allow a shared representation to be established, between the robot and the human agent, of the actions that must be accomplished for a given task. Communicating the robot's intention in relation to the objective that it wants to achieve allows the human agent to know the state of progress of the task, as well as the reason why the robot carries out a given action or adopts a given behavior. Lyons suggests that communicating this information improves the human agent's representation of the robot and also helps to improve the monitoring of the robot.
  • The analytical model: Lyons specifies that to achieve the objective that is assigned to it during its design, a robot must acquire and analyze a large quantity of data. Given the complexity of this information, the human agent may have difficulties understanding how the robot makes its decisions. The analytical model therefore aims to communicate to the human agent the analytical principles used by the robot during decision-making. This is very useful, in particular in complex situations in which the level of uncertainty is high.
  • The environment model: in difficult and potentially hostile conditions in which time constraints are high (such as military situations), it is essential that robotic systems operate with a dynamic that is in sync with their environment. The robot must communicate to the human agent the understanding that it has of topographical variations, meteorological conditions, threats and time constraints in a particular environment. This type of information indicates to the human agent exactly what the robot perceives, which improves the SA that the human agent has of the environment. Moreover, knowing that the robot knows the environmental conditions will help the human agent calibrate the confidence that can be placed in it.

In addition to the transparency of the robot for the human agent, the transparency of the human agent for the robot must be taken into account. This transparency refers to the set of information that the robot can integrate concerning the state of the human agent, as well as the information shared between the two decision-makers to manage and understand the sharing of work [LYO 13]. This transparency is therefore based on two models.

  • The teamwork model: the robot and the human agent form a team in which each must be able to clearly identify the objectives assigned to them. In this model, which we have named the "cooperation model", the automation must indicate to the human agent the tasks for which it is responsible, the tasks for which the human agent is responsible, and at what level of autonomy it operates. This information will allow the human agent to predict the actions of the automation.
  • The human agent model: this is the model that will allow the state of the human agent to be analyzed. Thanks to this model, the robot may be capable of diagnosing the level of stress, cognitive overload, etc. of the human agent. If the robot identifies cognitive overload in the human agent, Lyons suggests that it responds with an increase in the degree of automation for as long as this state is observed and particularly when it is carrying out critical tasks.

Although the structuring of Lyons' model into sub-models is valuable, the author does not specify which, among all this information, is the most important. Indeed, to cooperate, the agents must understand and share a set of information that constitutes the common frame of reference [DEB 09]. This information comes from exchanges at the "information gathering", "information analysis", "decision-making" and "action implementation" levels [PAR 00]. Moreover, Lyons' models do not specify in which situations a given piece of information must be presented: in all situations, or only in abnormal ones? In other words, they do not address the question of information prioritization, taking into account, for example, visual load or context. Taking into account the four functions of Parasuraman and his colleagues [PAR 00] therefore seems important in order to answer these questions and to complete the structuring proposed by Lyons' models.

4.3. Design of a transparent HCI for autonomous vehicles

4.3.1. Presentation of the approach

Since our research has been carried out with a view to cooperation between human agents and autonomous vehicles, we have paid particular attention to establishing a common frame of reference between the two agents. This is why we have transposed Lyons' models [LYO 13] to the driving domain, using the functions proposed by Parasuraman et al. [PAR 00]. This has enabled several principles of transparency to be distinguished and connected with each of these functions. In order to obtain concrete information to display, we identified the information requirements of the human agent in manual mode using the first two stages of cognitive work analysis. These requirements were then structured according to Michon's three levels of control [MIC 85] (a strategic level, a tactical level and an operational level), taking the various display possibilities into account. Once the question of information content was dealt with, creative sessions took place to direct the way this content could be presented in augmented reality. In order to direct the display of information as a function of the various available displays and the driving context, display rules (also known as "rules of supervision") were defined for the human–computer interaction (HCI) supervisor. Lastly, an experimental validation was designed for some of the principles, since the available time was too limited to validate them all. In the following section, we present the way in which the principles of transparency have been defined.
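
To make the idea of such display rules more concrete, the following minimal Python sketch shows how a supervision rule might route a piece of information to one of the available displays depending on the driving context. The attribute names, rule logic and display identifiers are our own illustrative assumptions and do not describe the actual LAR supervisor.

```python
from dataclasses import dataclass

@dataclass
class DrivingContext:
    """Hypothetical context used by the HCI supervisor to choose a display."""
    autonomous_mode: bool       # True while the controller is driving
    visual_load: str            # "low", "medium" or "high"
    maneuver_in_progress: bool  # e.g. a lane change is under way

def choose_display(info_type: str, ctx: DrivingContext) -> str:
    """Toy 'rule of supervision': route a piece of information to a display.

    The display names (ar_hud, non_ar_hud, remote_screen) mirror the simulator
    hardware described in section 4.4.4; the routing logic is only an assumption.
    """
    if info_type == "strategic":                  # itinerary, distance to arrival
        return "remote_screen"
    if info_type == "maneuver" and ctx.maneuver_in_progress:
        return "ar_hud"                           # overlay on the road scene
    if ctx.visual_load == "high":
        return "non_ar_hud"                       # keep the AR display uncluttered
    return "ar_hud"

# Example: a lane-change announcement while the visual load is medium
ctx = DrivingContext(autonomous_mode=True, visual_load="medium", maneuver_in_progress=True)
print(choose_display("maneuver", ctx))            # -> "ar_hud"
```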

4.3.2. Definition of the principles of transparency

The school of thought advocating human–computer cooperation considers that the human agent and the technical agent must operate as a team, that is, cooperate to achieve the best possible performance while facilitating each other's work. A common frame of reference must therefore be maintained and/or constructed between these agents, and this frame of reference can be supported by an HCI corresponding to what is known as a common work space (CWS). This CWS can be composed of several attributes, such as [DEB 06]:

  • – the formulation of information arising from activities of information acquisition;
  • – the formulation of problems that arise from diagnosis activities;
  • – the formulation of strategies that arise from schematic decision-making activities;
  • – the formulation of solutions that arise from precise decision-making activities;
  • – the formulation of instructions that arise from solution implementation activities.

In Lyons' models, there is no clearly explained correspondence between each sub-model and the four information-processing functions of Parasuraman et al. [PAR 00], or the attributes of the CWS recalled above. Thus, in our work, direct connections between the principles of transparency arising from Lyons' models and these functions will be established in order to supply this space.

Moreover, to ensure the overall consistency of the approach and to take into account our research topic of autonomous driving, we establish, wherever possible, relationships between each principle and Michon's model [MIC 85], which describes automobile driving using three levels: strategic, tactical and operational. In order to have an analysis matrix, we have therefore paired the functions of Parasuraman et al. [PAR 00] with Michon's levels [MIC 85].

Table 4.1 presents this matrix, which we have named the IPLC-based matrix (Information Processing and Level of Control-based matrix), that is, a matrix based on information processing and the levels of control.

Table 4.1. IPLC-based matrix

      I.T   I.A   D.M   A.I
S
T
O

In Table 4.1, the letters S, T and O refer respectively to the strategic, tactical and operational levels. The abbreviations I.T, I.A, D.M and A.I refer respectively to the functions "information gathering", "information analysis", "decision-making" and "action implementation". On the basis of Lyons' models, we have defined 12 principles for the LAR project, in which the main functions of the controller are to carry out lane changes and to adjust the vehicle's speed and headway as a function of the speed of, and distance to, the surrounding vehicles. For each principle, we have established, where possible, the connection with the IPLC matrix by putting a 1 in the corresponding cell.
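
For illustration, the IPLC matrix can be represented as a simple data structure. The sketch below is only a convenience representation (the variable names are ours); the cells filled for T1 and A1 anticipate Tables 4.2 and 4.3.

```python
# Minimal sketch of the IPLC matrix (3 levels of control x 4 functions).
# Row and column labels follow Table 4.1; the principles shown are only the
# examples developed later in the chapter (T1 and A1).
LEVELS = ["S", "T", "O"]                    # strategic, tactical, operational
FUNCTIONS = ["I.T", "I.A", "D.M", "A.I"]    # gathering, analysis, decision, action

def empty_matrix():
    """Return an IPLC matrix with every cell set to 0."""
    return {level: {func: 0 for func in FUNCTIONS} for level in LEVELS}

def matrix_for(cells):
    """Build the matrix for one principle by putting 1 in the given (level, function) cells."""
    m = empty_matrix()
    for level, func in cells:
        m[level][func] = 1
    return m

iplc = {
    "T1": matrix_for([("O", "I.A"), ("O", "A.I")]),   # see Table 4.2
    "A1": matrix_for([("T", "D.M"), ("O", "D.M")]),   # see Table 4.3
}
print(iplc["T1"]["O"]["I.A"])   # -> 1
```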

These principles are represented in Figure 4.2.

image

Figure 4.2. The transparency principles associated with Lyons’ models

While the principles that we are going to describe are general and correlated with Michon's three levels of control, no particular attention was paid to the strategic level because it is outside the scope of the LAR project. For each of Lyons' sub-models, we give an example of a principle that has been defined.

4.3.2.1. Principle from the general objective model

As we have previously mentioned, this model refers to the objective for which the robot was designed or even to its overall objective. In other words, in this model, it is a question of communicating to the human agent the “system’s raison d’être”. In autonomous driving, the essential information for the human agent to know is what the autonomous car is able to do and in what context. In the context of the LAR project, the following elements have been specified to characterize the autonomous vehicle under consideration:

  • – level 4 automation is a high level of automation restricted to certain conditions of use;
  • – the car allows the human agent to delegate all driving functions to the controller. Since these functions are critical (in terms of safety), the autonomous mode can only be engaged in certain traffic and environmental conditions. In the context of the LAR project, the car can be in autonomous mode if the road markings are clearly visible and the atmospheric conditions are good (no heavy rain or snow);
  • – the human agent can count on the controller, on the one hand, to monitor changes in the conditions of use of the autonomous mode and, on the other hand, to hand back control (return to manual mode);
  • – the human agent must remain available for occasional takeovers of control, knowing that a comfortable transition time will be granted before the return to manual mode (between five and seven seconds [GOL 14, LOU 15]);
  • – the car is designed to carry out, in total safety, the tasks of driving and maneuvers that arise during the autonomous mode.

It is essential that the human agent knows all these conditions of use precisely before being able to engage the autonomous mode. For example, if the human agent is not informed that the autonomous car is able to carry out lane changes in complete safety, he/she could be surprised when the controller initiates such a maneuver and could subsequently deactivate the autonomous mode.

In a general manner, we have therefore defined the O1 principle which stipulates that:

“The driver must know what the maximum degree of autonomy of the car is, as well as the conditions of use of this level of autonomy. These conditions must be clearly identifiable so that the driver can easily make the link between these and the activation of the autonomous mode”.

This principle does not have any elements in the IPLC matrix.

4.3.2.2. Principle from the task model

The task model contains information about the robot's objective at a given moment in time. In concrete terms, in autonomous car driving, the human agent must know at a given instant whether the vehicle is in the process of carrying out a lane change (to the left or to the right) or whether it is continuing in the same lane. This dimension of an ongoing action relates to the operational level of the driving activity model proposed by Michon [MIC 85] and to the "action implementation" function of the model by Parasuraman et al. [PAR 00]. The maneuver must take place in compliance with the formal rules of driving. In order to find out whether a rule is being followed, it is first necessary to know what rule must be followed. For example, the human agent can only know that the autonomous car has exceeded a maximum speed of 90 km/h if he/she knows that the maximum speed is in fact 90 km/h. In other words, the "information analysis" function at the operational level is necessary. As a result, we have proposed the T1 principle, which states:

“In the established autonomous driving mode, the driver must be informed that the controller controls the car in compliance with the applicable rules and good practices of driving (predictability of behavior of the car). Moreover, the driver must be capable of detecting actions (change of lane or staying in lane, change in speed) that the car is carrying out and understanding them”.

The T1 principle therefore leads to presenting to the driver the information that corresponds to the operational/information analysis and operational/action implementation cells of the IPLC matrix (see Table 4.2). For the elements that relate to the operational/action implementation cell, the driver must know that lateral and longitudinal control are correctly provided. These controls constitute the basic requirements of any car, and therefore of autonomous cars in particular. Since lateral and longitudinal control is continuous, it seemed relevant to present only the changes decided on by the controller with respect to the stable situation "stay in lane and maintain speed". Thus, at each instant in time, one of the nine following situations determined by the controller will be presented to the driver:

  • – continue in lane and accelerate;
  • – continue in lane and decelerate;
  • – continue in lane and maintain speed;
  • – change of lane to the right and accelerate;
  • – change of lane to the right and decelerate;
  • – change of lane to the right and maintain speed;
  • – change of lane to the left and accelerate;
  • – change of lane to the left and decelerate;
  • – change of lane to the left and maintain speed.
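
These nine situations are simply the combinations of a lateral action and a longitudinal action. The following minimal sketch (the names are ours, for illustration only) makes this combinatorial structure explicit:

```python
from enum import Enum
from itertools import product

class Lateral(Enum):
    STAY_IN_LANE = "continue in lane"
    CHANGE_LEFT = "change of lane to the left"
    CHANGE_RIGHT = "change of lane to the right"

class Longitudinal(Enum):
    ACCELERATE = "accelerate"
    DECELERATE = "decelerate"
    MAINTAIN = "maintain speed"

# The nine situations presented to the driver are all combinations
# of one lateral action and one longitudinal action.
SITUATIONS = list(product(Lateral, Longitudinal))
assert len(SITUATIONS) == 9
for lat, lon in SITUATIONS:
    print(f"{lat.value} and {lon.value}")
```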

Table 4.2. IPLC matrix for the T1 principle

      I.T   I.A   D.M   A.I
S
T
O            1           1

4.3.2.3. Principle from the analytical model

The analytical model aims to communicate to the human agent the "reasoning mechanisms" used by the robot to make its decisions. The autonomous system must be in a position to explain its decision-making strategies if the human agent asks for them. In car driving, the human agent must know how the lane change is to take place and, in particular, know the constraints that are taken into account, such as the other vehicles in the traffic, for example. As a result, the A1 principle stipulates that:

“In the established autonomous mode, the driver must know how each maneuver is carried out”.

The information to be displayed to the driver corresponding to the A1 principle relates to "decision-making" in the IPLC matrix, at both the tactical and operational levels (see Table 4.3). The A1 principle echoes that of Billings [BIL 96], which stipulates that "the autonomous system that controls air traffic must carry out tasks in a manner that is comprehensible to controllers".

Table 4.3. IPLC matrix for the A1 principle

      I.T   I.A   D.M   A.I
S
T                  1
O                  1

4.3.2.4. Principle from the environment model

In the environment model, the robot must communicate to the human agent the understanding it has of the site topography, the weather conditions and the time constraints in a particular environment. In an autonomous car, the sensors constitute the primary interface with the external environment: they detect the dynamic or static entities present in that environment. The sensors provide the input data that will drive specific behaviors of the autonomous car, which the human agent must understand. Access to this information allows the driver to verify that the sensors are functioning correctly and that he/she has the same representation of the environment as the vehicle. The E1 principle indicates that:

"In the established autonomous mode, the driver must have sufficient perception of what the autonomous car perceives in order to carry out his/her analyses and make his/her decisions, and must be able to ensure that the autonomous vehicle has the information necessary to make its decisions".

4.3.2.5. Principle from the cooperation model

Lyons stipulates that each agent must clearly identify the tasks assigned to it. In an SAE level 4 car, two driving modes are possible: the autonomous mode, during which the controller is in charge of driving, and the manual mode, during which the human agent takes care of the various controls required. It is therefore critical for the human agent to know which mode is active. Indeed, mode confusion can lead to a takeover of control under poor conditions. In the best case, the takeover occurs late and may cause an incident; in the worst case, mode confusion could cause an accident. We have therefore laid out the C1 principle, which states:

“The driver must know which mode is activated at each moment in time in order to avoid all mode confusion”.

There are no elements in the IPLC matrix that are able to express this principle.

4.3.2.6. Principle from the human agent model

The human agent model suggests that the robot carries out an analysis of the human agent from the emotional, cognitive and even physical point of view. The robot would then be able to assess a human’s state of stress, cognitive overload or any other state. We have therefore deduced the H1 principle which states that “the controller must survey the state of the driver and understand it to adapt the information displayed if necessary”.

There are no elements in the IPLC matrix that are able to express this principle. In addition, in the LAR project, there is no driver monitoring that would allow us to observe and analyze the human agent. Consequently, the fact that the driver is ready to take over is ascertained mainly by the fact that he/she has correctly placed his/her hands on the steering wheel and his/her foot on the pedal. Ideally, adjustment and “prioritization” of the information depending on the position of the human agent’s gaze or their workload would lead to an adaptive interface.

These principles of transparency that are applied to autonomous driving do not specify the data and the tangible information to be represented on the HCI to make the controller comprehensible. To define this information, we have resorted to cognitive work analysis.

4.3.3. Cognitive work analysis

Cognitive work analysis (CWA) is a methodology that is typically used to design interfaces described as ecological. These interfaces have the objective of making the constraints and the possibilities of actions visible in a given field of work and facilitating the use of a low-cost cognitive control method based on automations (skill-based) or on rules (rule-based) rather than on knowledge (knowledge-based). Their primary objective is to help the operators face new situations in complex socio-technical systems [VIC 02]. Their development relies on three stages of the CWA: work domain analysis, task analysis and competencies analysis.

Ecological interfaces were initially designed to support the control of processes (nuclear, petrochemical) or driving. They have since also been developed to facilitate the supervision of autonomous systems: unmanned underwater vehicles [KIL 14] as well as autonomous vehicles [REV 18]. In the context of the supervision of an autonomous vehicle, these interfaces have different aims depending on the phase they address: the takeover phase or the autonomous driving phase. In the first case (documented by Revell et al. [REV 18]), they are designed to make it easier to manage an unexpected situation, whereas in the second case (documented by Kilgore and Voshell [KIL 14]), their primary objective is to make it easier to understand the operation of the system, and they thus align with the objective of transparency. Our work is closer to the study by Kilgore and Voshell, since it relates to the autonomous driving phase. For the takeover phase to be carried out correctly, the human agent must have a correct mental representation of the environment and of the behavior of the autonomous agent.

In the following sections, we present the results from two analyses that have been carried out: analysis of the field of work and analysis of the control task.

4.3.3.1. Work domain analysis

Work domain analysis was proposed by Rasmussen [RAS 86]. Owing to the formative nature of cognitive work analysis, work domain analysis focuses on the constraints related to the safety and performance of a complex system rather than considering specific scenarios [NAI 01, NAI 13]. These constraints can be represented by an abstraction hierarchy (AH) that includes five levels which are, in decreasing order of abstraction: domain purpose, values and priority measures, functions related to the purpose (or general functions), processes related to objects (or physical functions) and physical objects. The links between the levels of the abstraction hierarchy express "means–ends" relationships, also known as "how–why" relationships. Indeed, the connections between a given function and the lower levels of abstraction indicate how this function is operationalized. Conversely, the links between a given function and the higher levels of abstraction indicate why this function exists.

4.3.3.1.1. Abstraction hierarchy

An abstraction hierarchy has already been implemented in the field of car driving, and several research publications have addressed this topic (see, for example, [SAL 07]). For our problem, Figure 4.3 presents an extract of the analyses produced from several articles in the literature, such as [JEN 08], from interviews with LAR project collaborators and from several iterations, the first of which can be consulted in [POK 15a, POK 15b].

In Figure 4.3, the information is distributed over several levels:

  • domain purpose: the system’s “raison d’être”. In the case of the task of driving, the identified purposes are: road transport that is effective, comfortable and completely safe;
  • values and priority measures: these correspond to the principles, priorities and values to be followed in order to achieve the domain purposes. In our case, these consist of complying with the defined itinerary, the separation distances between vehicles and the speed limits, and avoiding collisions between vehicles;
  • functions related to purposes: also known as general functions, they represent all the main functions that the system must carry out. We have identified five functions, including the function “monitoring vehicle maneuvers in the driving environment”, for example;
  • functions related to objects: these functions, also known as physical functions, correspond to functions supporting the general functions. For example, the function "detect and recognize speed limit signs" has been highlighted;
  • physical objects: these are sensors, mechanical parts and/or algorithms that guarantee the functions that relate to the objects. For example, navigation systems.
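
As an illustration, this extract of the abstraction hierarchy can be sketched as a nested structure in which the "means–ends" links answer the "how" question. The entries below are limited to examples quoted in the text, and the particular links shown are assumptions made for illustration only.

```python
# Illustrative sketch only: a fragment of the abstraction hierarchy as nested
# dictionaries, with "means-ends" links expressed as references to the level
# below. The link between the general function and the physical function shown
# here is an assumption for illustration, not a result of the analysis.
abstraction_hierarchy = {
    "domain_purpose": ["effective, comfortable and completely safe road transport"],
    "values_and_priority_measures": [
        "comply with the defined itinerary",
        "comply with separation distances between vehicles",
        "comply with speed limits",
        "avoid collisions between vehicles",
    ],
    "general_functions": {
        "monitor vehicle maneuvers in the driving environment": {
            "physical_functions": ["detect and recognize speed limit signs"],
        },
    },
    "physical_objects": ["navigation system", "sensors", "algorithms"],
}

def how(general_function: str) -> list:
    """Answer the 'how' question: which physical functions support a general function."""
    return abstraction_hierarchy["general_functions"][general_function]["physical_functions"]

print(how("monitor vehicle maneuvers in the driving environment"))
```
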
4.3.3.1.2. The definition of the information requirements for driving using work domain analysis

We have modeled the abstraction hierarchy (AH) for the driving domain independently of the agent that carries it out, whether a technical or a human agent. As Li and Burns [LI 17] have specified, a function in the AH can be the responsibility of the technical agent, the human agent or both (shared control). In other words, the AH can serve as a basis for the allocation of functions. In their research, Li and Burns used the AH in the field of financial transactions in order to highlight two scenarios of function allocation, one representing low automation and the other high automation [LI 17]. These authors have also specified that displaying the functions related to objects (physical functions) on an interface helps to limit losses of situational awareness in the human agent.

This result is of particular interest because it allows us to identify the potential information that will then lead to the definition of the information to be presented effectively, in order to maintain/establish sufficient situational awareness in the human agent. This identification is carried out from the physical functions determined in the abstraction hierarchy. Speed limit signs and the action of changing lanes are examples of such potential information.

Work domain analysis has allowed us to define the general information requirements of the driving task, without concentrating on particular situations. However, given that our research attributes particular attention to the maneuver of changing lanes, it is necessary to identify information requirements relating to this maneuver in a specific manner.

Therefore, we have carried out the analysis of this maneuver by using the control task analysis (ConTA), in order to identify the information-processing activities performed by the human agent. We present this analysis in the following section.

image

Figure 4.3. Extract of the analysis of the field of work. The blue lines show some relationships that exist between the boxes from one level to the next

image

Figure 4.4. Rasmussen's decision ladder for the lane change

4.3.3.2. Analysis of the control task

Analysis of the control task identifies the needs associated with the known operational modes of a system. One of the tools used by the ConTA is the “decision ladder”, which presents the various stages of diagnosis and decision-making activities in terms of steps of information processing and intermediate states of knowledge. The stages do not necessarily align linearly. An agent (human or technical) can take shortcuts.

For example, perception of a signal can imply an immediate corrective action without going through the intermediate stages of information analysis. Consequently, certain stages can be short-circuited.

In our research, we have developed a control task analysis of the lane change as it is carried out by a human agent. However, it should be remembered that in the LAR project, the car performs the lane change in an entirely autonomous manner. Our approach is therefore based on the fundamental hypothesis that the human agent will need the technical agent to send him/her information similar to that which he/she would have required if he/she had personally carried out the maneuver.

4.3.3.2.1. The decision ladder

The methodology is similar to that previously described. We focused on the lane change maneuver by consulting several articles, in particular [OLS 03, LEE 04, NAR 08]. The analysis was completed thanks to interviews with Renault employees. The various stages of construction of the "decision ladder" were carried out using Rasmussen's terminology [RAS 86]. In the context of our research, we focused on high-speed roads, in accordance with the specifications of the LAR project. Figure 4.4 presents an extract of our results.

In Figure 4.4, the circles, which represent the outputs in terms of information, are as follows:

  • the objective: in a lane change, the objective is to go from a lane of origin to a destination lane (change of lane) in a safe and efficient manner, taking account of the time windows and navigational constraints;
  • the alert: these are questions to which positive answers are going to bring about a lane change maneuver. An identified alert is: “is it necessary to go onto a motorway?”;
  • set of observations: we have identified nine types of information, among which is “what is the longitudinal and lateral position of the ego-vehicle?”;
  • state of the system: we have identified six states, including: “what are the intentions of the neighboring vehicles?”;
  • the options: four questions can be asked, among which: “is it possible to make a lane change whilst complying with speed limits?”;
  • the chosen objective: in our case, since the objective is unique, it is necessarily chosen. It is therefore a case of going from an original lane to a destination lane (change of lane) in a safe and efficient manner, taking into account the time windows and navigational constraints;
  • the target state: we have identified four target states, including: “the lane change must be carried out in compliance with the speed limits”;
  • the task: in our case, the task to be carried out is related to the trajectory that the vehicle must follow;
  • the procedure: we have defined five successive actions/controls that include: “activating the indicator to indicate the beginning of the lane change to other users”.

4.3.3.2.2. Definition of the information requirements for the lane change, using control task analysis

In order to put the results of our analysis in relation with the common work space that must be established between the human agent and the controller, we have divided the Rasmussen “decision ladder” into four regions that correspond to the four functions of information processing that can be automatized [PAR 00]: “information gathering”, “information analysis”, “decision-making” and “action implementation”. This approach has already been used by Li and Burns [LI 17]:

  • – the alert and the set of observations come from the "information gathering" function;
  • – the state of the system arises from the function “information analysis”;
  • – the options, the chosen objective and the target state come from the “decision-making” function;
  • – the task and the procedure come from the “action implementation” function.
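
A minimal sketch of this division is given below; the mapping simply encodes the correspondence listed above, and the example items are those quoted in the text (the function and variable names are ours):

```python
# Minimal sketch of the division of the decision ladder into the four
# information-processing functions of [PAR 00], as described above.
# Keys are the ladder outputs; values are the functions they feed.
LADDER_TO_FUNCTION = {
    "alert": "information gathering",
    "set of observations": "information gathering",
    "state of the system": "information analysis",
    "options": "decision-making",
    "chosen objective": "decision-making",
    "target state": "decision-making",
    "task": "action implementation",
    "procedure": "action implementation",
}

def requirements_by_function(ladder_outputs: dict) -> dict:
    """Group concrete information requirements (output -> list of items) by function."""
    grouped = {}
    for output, items in ladder_outputs.items():
        grouped.setdefault(LADDER_TO_FUNCTION[output], []).extend(items)
    return grouped

# Hypothetical example using items quoted in the text
example = {
    "alert": ["is it necessary to return to the preferential lane?"],
    "state of the system": ["what are the intentions of the neighboring vehicles?"],
}
print(requirements_by_function(example))
```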

The information requirements are derived directly from this “decision ladder”. For example:

  • – for the "information gathering" function, the identified alert "is it necessary to return to the preferential lane?" requires this alert to be communicated to the human operator;
  • – for the "information analysis" function, the state of the situation "what are the intentions of the other vehicles?" requires these intentions to be communicated;
  • – for the "decision-making" function, the decision "the lane change must be rapid" requires the decision to make a quick lane change to be communicated;
  • – for the "action implementation" function, the action "stabilizing the ego-vehicle in the destination lane by deceleration" requires this deceleration to be communicated.

Thus, we have extracted the information requirements function by function, and recorded them.

This analysis (ConTA), coupled with the previous one, allows the majority of information requirements to be defined for the interface of the autonomous car. We talk about a majority because the information requirements regarding cooperation between the controller and the human agent do not appear in this analysis. The CWA allows these requirements to be defined in the following stages, in particular in the SOCA (social organization and cooperation analysis). In the context of the LAR project, allocation of the functions was set up from the beginning: the controller carries out all the maneuvers in autonomous mode and the human agent must take over to control the vehicle in manual mode. Owing to this, the information requirements that arise from controller-human agent cooperation can be directly defined on the basis of the LAR project specifications and the principles of transparency that arise from the model of transparency. In the following sections, we present the experimental protocol implemented to validate these principles.

4.4. Experimental protocol

4.4.1. Interfaces

The objective of our work is to validate or invalidate the principles of transparency defined in the context of the use of an autonomous car at level 4 of the SAE taxonomy and, consequently, to evaluate the optimal level of transparency for a HUD interface specific to the LAR project. We therefore decided to activate or deactivate the display of the information associated with these principles according to the four information-processing functions of Parasuraman et al. [PAR 00]: "information gathering", "information analysis", "decision-making" and "action implementation". A total of 16 interfaces can thus be defined. However, we focused on only five of these 16 interfaces, for two main reasons:

  • time constraints: since the human resources allocated to the LAR project and the time available were limited, it was not possible to develop each of these 16 HCIs;
  • financial constraints: evaluating 16 HCIs would have required a large number of participants in order to validate them.

Table 4.4 sets out the functions present in each of the HCIs that have been selected and developed.

In presents only basic information; Iw adds information related to information gathering and action implementation; Ii is like Iw with additional information related to information analysis; It presents information related to information gathering, information analysis and decision-making; Is presents all of the information.

Table 4.4. Specifications of the five HCIs according to the functions of [PAR 00]. "1" means that the information relating to the function is present; "0" means that it is absent

     Information gathering   Information analysis   Decision-making   Action implementation
In             0                      0                    0                    0
Iw             1                      0                    0                    1
Ii             1                      1                    0                    1
It             1                      1                    1                    0
Is             1                      1                    1                    1
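
The five configurations of Table 4.4 can be encoded explicitly, which also makes the pairwise comparisons used in the evaluation (see below) easy to read. This is only an illustrative sketch; the names are ours:

```python
# Sketch of the five HCI configurations of Table 4.4 as an explicit mapping.
# Keys: interface name; values: set of active information-processing functions.
INTERFACES = {
    "In": set(),                                              # reference interface, basic info only
    "Iw": {"information gathering", "action implementation"},
    "Ii": {"information gathering", "information analysis", "action implementation"},
    "It": {"information gathering", "information analysis", "decision-making"},
    "Is": {"information gathering", "information analysis", "decision-making",
           "action implementation"},
}

def isolated_function(a: str, b: str):
    """Return the single function that differentiates two interfaces, if any."""
    diff = INTERFACES[a] ^ INTERFACES[b]     # symmetric difference
    return diff.pop() if len(diff) == 1 else None

# The pairwise comparisons used in the evaluation
print(isolated_function("Iw", "Ii"))   # -> information analysis
print(isolated_function("Ii", "Is"))   # -> decision-making
print(isolated_function("Is", "It"))   # -> action implementation
```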

In does not contain any information from the principles of transparency in relation to the functions defined by Parasuraman et al. [PAR 00], as shown in Table 4.4. However, it is not lacking in information. Indeed, we have identified the basic information that makes up this interface and that makes it the reference interface. Identified on the basis of the analyses of interfaces of recent vehicles in circulation and certain prototypes of autonomous cars, this information includes in particular:

  • – the instantaneous speed of the autonomous vehicle;
  • – the speed limit defined for the section in which the autonomous car is located;
  • – an indication that the autonomous mode is active;
  • – the duration of the autonomous mode before entering a zone that requires a takeover;
  • – navigation: this is not as dynamic as with a GPS. It provides an overview, in the form of a map, of the autonomous car's route, with an indication of the local route to take in the event of a motorway split or an exit from the motorway;
  • – the blind spot: this indicates that another vehicle is present outside of the autonomous car’s range of vision.

Concerning the HCI evaluations, they will be based on the differences observed during data analysis. Thus:

  • – the difference between Iw and Ii will allow the added value of information from the “information analysis” function to be evaluated;
  • – the difference between Ii and Is will allow the added value of information from the "decision-making" function to be evaluated;
  • – the difference between Is and It will allow the added value of information from the “action implementation” function to be evaluated.

The various symbols and annotations used for these interfaces have been defined after several creative sessions. Some of them are presented in [POK 16].

4.4.2. Hypotheses

During manual driving, human agents construct a representation of the situation (this is situational awareness) on the basis of elements relating to their perception, their understanding and the projections they make from them. This representation must be relevant, meaning that it must allow them to acquire the right information at the right moment in time. In the context of the autonomous car, the HCIs that we have defined all aim to facilitate perception and, for some of them, also understanding of the situation and projection of its state into the near future (for example, the fact that the autonomous car is going to take an exit). We therefore make the hypothesis that the level of transparency of the HCIs will have an impact on the drivers' SA (hypothesis 1). Moreover, we also make the hypothesis that the least transparent HCI (In) will not be the most appreciated by the human agents, and that the latter will prefer interfaces that clearly show the actions that the controller has carried out or is going to carry out (hypothesis 2).

4.4.3. Participants

A total of 45 people, distributed as shown in Table 4.5, took part in the experiment, which took place in the DrSIHMI3 simulator. The average age of the 23 women was 43 years, with a standard deviation of 9.23; the average age of the men was 43.5 years, with a standard deviation of 9.53. On average, participants had held a driving license for 22.4 years, with a standard deviation of 10.33.

Table 4.5. Composition of the participants according to the criteria “sex” and “age”

Sex      Age group (in years)   Number of people
Female   25–44                  13
Female   45–65                  10
Male     25–44                  11
Male     45–65                  11

For each participant, the test lasted approximately two hours, with effectively 40 minutes of driving time. In a few cases, the two hours were exceeded, in particular when the participant spent more time in the debriefing interviews carried out after the tests. In any case, we had set the maximum duration of the test, from the start of the study, at two hours and thirty minutes for each participant, in order to avoid any fatigue. This limit was complied with.

4.4.4. Equipment

The study was carried out in the simulator at the Technology Research Institute System X (IRT SystemX), the DrSIHMI (Figure 4.5). This static simulator is made up of the following elements:

  • – a car driving station: this includes a steering wheel, the accelerator, brake and clutch pedals, a gear stick, a button box, a non-AR HUD, an AR HUD and a remote display:
    • - the button box allows the driver to indicate the degree of discomfort felt during a situation, using buttons 1 and 2,
    • - the non-AR HUD: given that DrSIHMI does not have a physical dashboard, this display, with dimensions of 15° × 4°, allows classic information to be displayed, such as the speed of the ego-car,
    • - the AR HUD: with dimensions of 15° × 5°, this display allows virtual information to be projected into the driving environment for the perception of augmented-reality elements,
    • - the remote screen contains information about the strategic level. This screen presents an image of the journey for the autonomous vehicle to follow, as well as the distance to the arrival point;
  • – a curved projection screen with an opening of 180°.

image

Figure 4.5. DrSIHMI

In this equipment context, and in order to carry out the experiments, several scenarios have been developed.

4.4.5. Driving scenarios

A driving scenario sets out a specification of all the events that can occur in the environment: in particular, traffic vehicle maneuvers, modifications of the road topology (e.g. bend or straight road) or weather conditions. In the context of our work, the scenarios were developed by dedicated technical staff, on the basis of specifications that we provided, using the SCANeR software. Developed by OKTAL SA, SCANeR models scenarios graphically using "condition–action" pairs ("if…, then…") in the driving environment [REY 00]. The driving scenarios are designed as a succession of driving scenes in a given environment or terrain.
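
As a purely illustrative sketch, a "condition–action" pair can be thought of as a predicate on the scene state associated with an action. The code below does not reproduce SCANeR's actual scripting format; the class names, attributes and example rules are assumptions made for illustration only.

```python
# Purely illustrative sketch of "condition-action" scenario rules, in the
# spirit of the pairs described above. Not SCANeR's actual format.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SceneState:
    ego_speed_kmh: float
    distance_to_lead_m: float
    police_vehicle_approaching: bool

@dataclass
class Rule:
    condition: Callable[[SceneState], bool]   # "if ..."
    action: str                               # "... then"

rules = [
    Rule(lambda s: s.distance_to_lead_m < 30.0, "lead truck brakes gently"),
    Rule(lambda s: s.police_vehicle_approaching, "spawn police vehicle in left lane"),
]

state = SceneState(ego_speed_kmh=110.0, distance_to_lead_m=25.0, police_vehicle_approaching=False)
triggered = [r.action for r in rules if r.condition(state)]
print(triggered)   # -> ['lead truck brakes gently']
```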

Figure 4.6 shows a bird’s eye view of the terrain used.

We defined several scenes: normal scenes, such as a lane change by the autonomous vehicle due to a motorway split, and less everyday scenes, such as overtaking two heavy goods vehicles combined with the arrival of a police vehicle.

image

Figure 4.6. Terrain used during the simulation. The journey in orange is the one that is effectively taken by the autonomous car in the simulation. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

Before the beginning of each test, it was specified to each participant that the behavior of the autonomous car might not be homogeneous for the duration of the scenario. Indeed, we specified that the autonomous vehicle might accelerate and brake abruptly, and that it might, in certain circumstances, reduce its safety distances. This presentation phase, carried out on a computer, mainly aimed to comply with the principles arising from the model of the general objective4. These principles aim to make the human agent understand what the autonomous car is and what it can do. Varying the relative order of the scenes allowed four different scenarios to be defined:

  • – scenario 0: this lasted for three minutes and thirty seconds and constituted the learning scenario;
  • – scenarios a, b and c: of an average duration of nine minutes, these similar scenarios (driving scenes that are identical but presented in a different order) made up the scenarios for each drive. They were associated with one of the five HCIs and submitted in a different order to each participant.

Table 4.6. Number of participants per HCI/scenario

HCI/scenario   Sa   Sb   Sc
In             15   15   15
Iw              8    8    8
Ii              8    7    7
Is              7    7    7
It              8    8    7

The numbers of participants for each interface and for each scenario are presented in Table 4.6.

During each of these scenarios, various measures have been carried out.

4.4.6. Measured variables

In the context of these experiments, we collected data about the cognitive activities of the participants (information gathering and situational awareness) on the one hand, and about elements relating to the user experience (satisfaction) on the other hand. Table 4.7 presents all the variables studied, as well as their possible values (for discrete quantitative variables) or their modalities (for qualitative variables).

Table 4.7. Variables used in the experimental process

Type of variables                Labels     Description                  Interval of values or modalities
Provoked independent variables   F          Freeze                       1 (presence); 0 (absence)
                                 DN         Driving number               1, 2 or 3
                                 I          Interface                    In, Iw, Ii, Is and It
                                 S          Driving scenario             a, b, c
Dependent variables              G1 to G6   Answers to SAGAT questions   1 (correct); 0 (false)
                                 FI         First interface              In, Iw, Ii, Is and It
                                 SI         Second interface             In, Iw, Ii, Is and It
                                 TI         Third interface              In, Iw, Ii, Is and It

The dependent variables have been collected at different times:

  • – the answers to questionnaire Q1 regarding situational awareness were collected only during the first drive (DN 1), on one of the following two scenes: the first corresponds to overtaking a train of vehicles, one of which changes lane; the other corresponds to overtaking two heavy goods vehicles with the arrival of a police vehicle;
  • – the ranking of the HCIs in first, second and third positions was carried out at the end of the third experimental condition. This ranking aims to evaluate the acceptability of the HCIs.

In the experimental validation, we manipulated four independent variables, or "factors": the driving number (DN), the interface (I), the freeze situation (F) and the driving scenario (S). In addition, there were control factors (or independent variables) from questionnaire Q0 about driving habits, such as the age of the participants (A) and their gender (G).

4.4.7. Statistical approach

Given that several variables were measured, they were all considered by means of a multivariate approach, rather than by carrying out several univariate analyses. Within the family of multivariate analysis tools, classification and grouping techniques are well known [JOH 07]. In the context of our research work, we opted for classification methods because they present the advantage of “easily” showing the relations between the variables and the effects of each of them [BEN 92]. The presence of both quantitative and qualitative variables led us to use multiple correspondence analysis (MCA) in place of the principal component analysis (PCA) that is generally used [BEN 92].

Although MCA is mostly used for the analysis of qualitative variables, in particular for multiple-choice questionnaires or survey data [BEN 92, CES 14, BIL 16], it is important to note that, when quantitative variables are coded in a fuzzy way, it can reveal nonlinear relations. Consequently, the models it uses are more complex than those of PCA [BEN 92, LOS 14]. For a first analysis, the fuzzification model (FM) is applied to arithmetical means and value intervals. The FM is less sensitive to extreme values (values that are sometimes erroneous), because the adjusted average is less sensitive to abnormal values than the arithmetical average. Before going further, we draw attention to the following considerations (a coding sketch is given after this list):

  – consideration 1: to evaluate the impact of the factors, a summarizing operation is necessary. This summarizing requirement leads to a lower loss of information with local windowing than without [LOS 01, SCH 15];
  – consideration 2: other scale-change models exist, but this one offers good results (see [LOS 14] for comparative studies);
  – consideration 3: each histogram and its corresponding window were carefully verified.
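To make the idea of fuzzy coding more concrete, here is a minimal sketch, in Python, of one possible fuzzification scheme: a quantitative variable is recoded as membership degrees over a few triangular windows spread over its observed range, and the resulting columns can then be analyzed alongside the indicator columns of the qualitative variables. This only illustrates the principle; it is not the fuzzification model used by the authors, and the window placement and number of windows are assumptions.

```python
import numpy as np

def fuzzy_code(values, n_windows=3):
    """Code a quantitative variable as membership degrees over triangular windows.

    Illustrative sketch only: window centres are spread evenly over the observed
    range, and each value is shared between the two nearest windows so that its
    memberships always sum to 1.
    """
    values = np.asarray(values, dtype=float)
    centres = np.linspace(values.min(), values.max(), n_windows)
    coded = np.zeros((len(values), n_windows))
    for i, v in enumerate(values):
        j = np.searchsorted(centres, v)
        if j == 0:
            coded[i, 0] = 1.0                 # at or below the first centre
        elif j >= n_windows:
            coded[i, -1] = 1.0                # at or above the last centre
        else:
            left, right = centres[j - 1], centres[j]
            w = (v - left) / (right - left)   # share between the two nearest windows
            coded[i, j - 1] = 1.0 - w
            coded[i, j] = w
    return coded

# Example: fuzzy coding of hypothetical reaction times (in seconds)
print(fuzzy_code([1.2, 2.5, 3.8, 2.0, 4.1]))
```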

Applications of MCA in the field of transport exist [CHA 13, CES 14, BIL 16], with qualitative variables [SCH 15] and with fuzzified quantitative variables. The independent variables that we selected are as follows: the participant (P), the driving number (DN), the interface (I), the driving scenario (S) and the freeze situation (F). The data collected during each experimental condition presented in this chapter come, for each participant, from questionnaire G on the SA (G1–G6) and from the ranking of the HCIs in first, second and third positions (respectively FI, SI and TI). There were only 45 datasets for the measures of situational awareness (G) because these were only collected during the first drive (DN1). FI (HCI ranked in first position), SI (HCI ranked in second position) and TI (HCI ranked in third position) were only provided at the end of the third drive (DN3). In the following sections, we present the results obtained from this experiment.
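For readers unfamiliar with MCA, the sketch below shows the core computation: the qualitative variables are expanded into a complete disjunctive (indicator) table, and a correspondence analysis is applied to it via a singular value decomposition. The column names in the usage comment are placeholders; in practice a dedicated statistical package would normally be used rather than this hand-rolled version.

```python
import numpy as np
import pandas as pd

def mca_row_coordinates(df: pd.DataFrame, n_components: int = 2) -> np.ndarray:
    """Multiple correspondence analysis via SVD of the indicator matrix (sketch).

    df holds one qualitative variable per column; the function returns the
    principal coordinates of the observations on the first components.
    """
    Z = pd.get_dummies(df).to_numpy(dtype=float)   # complete disjunctive table
    P = Z / Z.sum()                                # correspondence matrix
    r = P.sum(axis=1)                              # row masses
    c = P.sum(axis=0)                              # column masses
    S = np.diag(1 / np.sqrt(r)) @ (P - np.outer(r, c)) @ np.diag(1 / np.sqrt(c))
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    return (np.diag(1 / np.sqrt(r)) @ U[:, :n_components]) * sigma[:n_components]

# Hypothetical usage, with one row per observation and the qualitative factors as columns:
# coords = mca_row_coordinates(obs_df[["I", "S", "F", "DN"]])
```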

4.5. Results and discussions

4.5.1. Situational awareness

The situational awareness was evaluated during the first drive. During this drive, questionnaire G was administered when a freeze occurred (see Table 4.8). The questions asked were related to six items: vehicle action; vehicle dynamic; vehicle speed; future vehicle action; number of vehicles in front of the autonomous vehicle; and number of vehicles behind the autonomous vehicle.

Table 4.8. Questionnaire G on situational awareness

Level of the SA | Questions
Level 1: perception | G1. What action is your car taking? (a. Continuing in lane; b. Change of lane to the left; c. Change of lane to the right)
 | G2. What dynamic is your car undergoing? (a. Acceleration; b. Deceleration; c. Constant speed)
 | G5. How many cars are there in front of the autonomous vehicle? (0, 1, 2, 3 or 4)
 | G6. How many cars are behind the autonomous vehicle? (0, 1, 2, 3 or 4)
Level 2: comprehension | G3. Is the vehicle below the legal speed limit? (Yes / No)
Level 3: projection | G4. What is the future action of your vehicle? (a. Continuing in lane; b. Change of lane to the left; c. Change of lane to the right)

The response to each of these questions (G1–G6) was coded with two modalities, 0 and 1, corresponding respectively to a false answer and a correct answer.

4.5.1.1. Results

In Figure 4.7, we present hypothesis tests evaluating, in an inferential context, the impact of the factors I (HCI), S (scenario), F (freeze) and G (gender) on the answers to questionnaire G relating to situational awareness. Each line corresponds to one of the questions of questionnaire G and each column corresponds to one of the five factors. The cyan bars show the histogram for each of the 30 factor/variable pairs and the bar graphs show the p values.
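The chapter does not state which test produced the p values shown in Figure 4.7. Purely as an illustration of how such factor/question pairs can be screened, the sketch below applies a chi-squared test of independence to the contingency table of each pair; the column names are placeholders and the choice of test is an assumption, not the authors' procedure.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def factor_question_pvalues(df, factors, questions):
    """Return a p value for each factor/question pair (illustrative sketch).

    df is expected to hold one row per observation, with the factor columns
    (e.g. interface, scenario) and the 0/1 answer columns G1..G6.
    """
    pvalues = {}
    for f in factors:
        for q in questions:
            table = pd.crosstab(df[f], df[q])      # contingency table factor x answer
            _, p, _, _ = chi2_contingency(table)   # chi-squared test of independence
            pvalues[(f, q)] = p
    return pvalues

# Hypothetical usage on a dataframe with columns "I", "S", "F", "gender", "G1".."G6":
# pvals = factor_question_pvalues(df, factors=["I", "S", "F", "gender"],
#                                 questions=[f"G{i}" for i in range(1, 7)])
```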

4.5.1.2. Discussion

We have observed a significant effect of variable I on G2 (dynamic of the vehicle). Regarding this question, the number of correct answers was very low in In and the highest number of correct answers was observed in Iw. In other words, the HCI that presents elements relating to the collection of information and the implementation of action allows the participants to have a better perception of the dynamic of the autonomous vehicle than one that does not present any of these elements. One of the specific elements in Iw that highlights the dynamic of the autonomous vehicle is the table of nine boxes, which tends towards validation of the principle of T1 transparency and which stipulates in part that “the driver must be capable of detecting actions (change of lane, continuation in lane, change of speed) that the car is in the process of carrying out, and understanding them”.

Similarly, there is a significant effect of I on G6 (number of vehicles behind the autonomous vehicle). Concerning this question, the number of correct answers was very low in Is and the highest number of correct answers was observed in Iw. In other words, with the HCI that displays the collection of information and the implementation of the action, the participants perceived the number of vehicles behind the autonomous vehicle better than with the HCI in which not only the results of information collection and information analysis, but also the results of decision-making, are presented. This result is all the more surprising given that, in Is, there were explanations about the presence of a police vehicle or of a vehicle arriving quickly from behind. We hypothesize that this HCI, which presents numerous pieces of information, caused an information overload. Indeed, as Bawden and Robinson [BAW 09] have noted, a large quantity of information on an interface can lead the reader not to process all of it.

These two results tend towards the validation of hypothesis 1 according to which the transparency of HCIs causes differences in the representation of the situation that the agents set up. In summary, from the point of view of cognitive activities, Iw (HCI that shows the information relating to information collection and action implementation) leads to better results from the point of view of the situational awareness.


Figure 4.7. Hypothesis tests to evaluate the impact of the factors I, S, F and G concerning questionnaire G in an inferential context

4.5.2. Satisfaction of the participants

The participants carried out three drives and therefore saw three HCIs. At the end of the three drives, the following question was asked of them, by way of conclusion: “by order of preference, which are the HCIs that have best allowed you to understand the system? In first position, second position?”. For example, a participant X who has seen In, Iw and Ii could choose to rank Ii in the first position, Iw in the second position and In in the third position.

4.5.2.1. Results

These data differ from the previous ones for two reasons:

  – reason 1: the values correspond to relative evaluations, whereas the previous values correspond to absolute values;
  – reason 2: the 45 participants have not all evaluated the same interfaces, with the exception of the In interface.

Therefore, a specific procedure has had to be used. The six graphs in Figure 4.8 clearly show that the In interface is appreciated least of all by the participants, in comparison to the four others.


Figure 4.8. Ranking of the I interfaces by each participant. For a color version of this figure, see: www.iste.co.uk/vanderhaegen/automation.zip

For a more general comparison, Figure 4.9 shows, for each interface, the ratio between the number of times that it was ranked in first position and the total number of times that it was tested. For example, for the In interface, these two numbers are 9 and 45 respectively, which provides a ratio of 8.9%.

Figure 4.9 shows that the Is interface is slightly better appreciated than the It interface.


Figure 4.9. Ranking of the interfaces in first position

4.5.2.2. Discussion

The participants’ preference relates less often to the In interface than to the others, which tends to validate hypothesis 5, according to which the opaque HCI would be the least appreciated by human agents. In addition, interfaces Is and It received the most votes. On the one hand, this result agrees with the results of Sanders, Wixon and Schafer [SAN 14], who found that a high level of transparency is preferred by individuals. It also corroborates a study by Swearingen and Sinha [SWE 02], which suggests that, in general, people prefer interfaces that are perceived as transparent.

On the other hand, given that Is and It were the only HCIs to incorporate the elements of decision-making, this leads us to believe that this function brings added value to the transparency of an HCI. The T2 and T3 principles have thus been validated. They stipulate respectively that “in the established autonomous mode, the driver must be capable of perceiving the intention of the autonomous car (the maneuver that it is going to carry out) and of understanding why this maneuver is going to be carried out” and that “in the established autonomous mode, the driver must be informed of any maneuver that can interrupt another that is under way (change of plan)”.

4.6. Conclusion

In this chapter, we have presented an interface design approach for a car with a high level of automation, an SAE level 4 car that can be in autonomous mode or in manual mode. In the first mode, lateral control and longitudinal control of the car are entirely managed by the controller, whereas in the other, the human agent takes care of these two controls.

In order to design an interface that makes the controller comprehensible to the human driver, and that maintains or helps reconstruct the driver’s understanding of the situation before reverting to manual mode, we have used the notion of transparency, relying on Lyons’ models [LYO 13]. On this basis, we have stated 12 principles of transparency, derived from each model. In order to instantiate each of these principles and define the information to be displayed to the driver, the first two steps of cognitive work analysis have been implemented.

We then defined five interfaces in order to distinguish the contribution of each piece of information according to the information processing functions of Parasuraman et al. [PAR 00], and to demonstrate the interest of the principles. These interfaces were tested in a driving simulator with a sample of 45 people in order to validate the principles. The hypotheses that we presented related to the impact of transparency on situational awareness and on user satisfaction. Contrary to hypothesis 1, the scores related to situational awareness did not increase with the amount of information provided. In general, the interfaces with the fewest instantiated functions led to the best answers from the users.

Concerning user satisfaction, the interfaces where all the functions were represented were the most appreciated, which corroborates hypothesis 2.

Thus, there appears to be a contradiction between the two types of measurement: the cognitive activities, on the one hand, and the user experience, on the other. Indeed, displaying all the principles is appreciated by the users, even though not all of them contribute to improving their situational awareness.

Taking these results into account, several research perspectives emerge. In particular, it would be interesting to rerun the experiment with the human agent carrying out a secondary task, and to evaluate situational awareness for each interface under this condition. This would show whether the results that we have obtained are corroborated or not. Furthermore, more questions could be integrated into the situational awareness questionnaire in order to develop it further and to obtain a more precise representation of the user’s situational awareness. In the medium term, integrating all of the previous possibilities, it would be interesting to introduce a takeover phase in order to evaluate the impact of the principles of transparency on the quality of the transition between the autonomous mode and the manual mode.

4.7. Acknowledgments

This work was possible thanks to funding from the French government in the context of the PIA program (Programme pour les investissements pour l’avenir – Program for Future Investments) at the SystemX Institute of Research and Technology.

4.8. References

[BAI 83] BAINBRIDGE L., “Ironies of automation”, Automatica, vol. 19, no. 6, pp. 775–779, 1983.

[BAW 09] BAWDEN D., ROBINSON L., “The dark side of information: overload, anxiety and other paradoxes and pathologies”, Journal of Information Science, vol. 35, no. 2, pp. 180–191, 2009.

[BEN 92] BENZECRI J.P., “Validité des échelles d’évaluation en psychologie et en psychiatrie et corrélations psychosociales”, Les cahiers de l’analyse des données, vol. 17, no. 1, pp. 55–86, 1992.

[BIL 96] BILLINGS C.E., Human-centered aviation automation: Principles and guidelines, NASA technical memorandum, no. 110381, 1996.

[BIL 16] BILLOT-GRASSET A., AMOROS E., HOURS M., “How cyclist behavior affects bicycle accident configurations?”, Transportation Research Part F: Traffic Psychology and Behaviour, vol. 41, pp. 261–276, 2016.

[CES 14] CESTAC J., PARAN F., DELHOMME P., “Drive as I say, not as I drive: Influence of injunctive and descriptive norms on speeding intentions among young drivers”, Transportation Research Part F: Traffic Psychology and Behaviour, vol. 23, pp. 44–56, 2014.

[CHA 13] CHAUVIN C., LARDJANE S., MOREL G. et al., “Human and organisational factors in maritime accidents: Analysis of collisions at sea using the HFACS”, Accident Analysis & Prevention, vol. 59, pp. 26–37, 2013.

[CRI 09] CRING E.A., LENFESTEY A.G., Architecting human operator trust in automation to improve system effectiveness in multiple unmanned aerial vehicles (UAV) control, PhD thesis, Air Force Institute of Technology, Ohio, United States, 2009.

[DEB 06] DEBERNARD S., Coopération homme-machine et répartition dynamique des tâches – Application au contrôle de trafic aérien, HDR, Université de Valenciennes, France, 2006.

[DEB 09] DEBERNARD S., GUIOST B., POULAIN T. et al., “Integrating human factors in the design of intelligent systems: An example in air traffic control”, International Journal of Intelligent Systems Technologies and Applications, vol. 7, no. 2, pp. 205–226, 2009.

[DEB 16] DEBERNARD S., CHAUVIN C., POKAM R. et al., “Designing human–machine interface for autonomous vehicles”, IFAC-PapersOnLine, vol. 49, no. 19, pp. 609–614, 2016.

[END 95] ENDSLEY M.R., “Toward a theory of situation awareness in dynamic systems”, Human Factors, vol. 37, no. 1, pp. 32–64, 1995.

[END 16] ENDSLEY M.R., Designing for Situation Awareness: An Approach to User-centered Design, CRC Press, Boca Raton, United States, 2016.

[ERI 15] ERIKSSON A., STANTON N.A., “When communication breaks down or what was that? – The importance of communication for successful coordination in complex systems”, Procedia Manufacturing, vol. 3, pp. 2418–2425, 2015.

[GOL 14] GOLD C., LORENZ L., BENGLER K., “Influence of automated brake application on take-over situations in highly automated driving scenarios”, Proceedings of the FISITA 2014 World Automotive Congress, Maastricht, The Netherlands, 2014.

[GRI 75] GRICE H.P., “Logic and conversation”, Syntax and Semantics, vol. 3, pp. 41–58, 1975.

[HOF 15] HOFF K.A., BASHIR M., “Trust in automation: Integrating empirical evidence on factors that influence trust”, Human Factors, vol. 57, no. 3, pp. 407–434, 2015.

[JEN 08] JENKINS D.P., STANTON N.A., SALMON P.M. et al., “Using cognitive work analysis to explore activity allocation within military domains”, Ergonomics, vol. 51, no. 6, pp. 798–815, 2008.

[JOH 07] JOHNSON R.A., WICHERN D.W., Applied Multivariate Statistical Analysis, 6th ed., Pearson Prentice Hall, Upper Saddle River, United States, 2007.

[KIL 14] KILGORE R., VOSHELL M., “Increasing the transparency of unmanned systems: Applications of ecological interface design”, International Conference on Virtual, Augmented and Mixed Reality, Heraklion, Greece, 2014.

[KIM 06] KIM T., HINDS P., “Who should I blame? Effects of autonomy and transparency on attributions in human–robot interaction”, The 15th IEEE International Symposium on Robot and Human Interactive Communication, Columbia, United States, 2006.

[LEE 04] LEE S.E., OLSEN E.C., WIERWILLE W.W. et al., A comprehensive examination of naturalistic lane-changes, Report, the National Highway Traffic Safety Administration, Washington, United States, March 2004.

[LI 17] LI Y., BURNS C.M., “Modeling automation with cognitive work analysis to support human–automation coordination”, Journal of Cognitive Engineering and Decision Making, vol. 11, no. 4, pp. 299–322, 2017.

[LOS 01] LOSLEVER P., “Obtaining information from time data statistical analysis in human component system studies (I). Methods and performances”, Information Sciences, vol. 132, nos 1–4, pp. 133–156, 2001.

[LOS 14] LOSLEVER P., “Membership function design for multifactorial multivariate data characterizing and coding in human component system studies”, IEEE Transactions on Fuzzy Systems, vol. 22, no. 4, pp. 904–918, 2014.

[LOU 15] LOUW T., MERAT N., JAMSON H., “Engaging with highly automated driving: to be or not to be in the loop?”, 8th International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, Salt Lake City, United States, 2015.

[LYO 13] LYONS J.B., “Being transparent about transparency: A model for human–robot interaction”, Trust and Autonomous Systems: AAAI Spring Symposium, Stanford, United States, 2013.

[MEY 14] MEYER G., DEIX S., “Research and innovation for automated driving in Germany and Europe”, in MEYER G., BEIKER S. (eds), Road Vehicle Automation, pp. 71–81, Springer, New York, United States, 2014.

[MIC 85] MICHON J.A., “A critical view of driver behavior models: What do we know, what should we do?”, in EVANS L., SCHWING R.C. (eds), Human Behavior and Traffic Safety, pp. 485–524, Springer, Boston, United States, 1985.

[NAI 01] NAIKAR N., SANDERSON P.M., “Evaluating design proposals for complex systems with work domain analysis”, Human Factors, vol. 43, no. 4, pp. 529–542, 2001.

[NAI 13] NAIKAR N., Work Domain Analysis: Concepts, Guidelines, and Cases, CRC Press, Boca Raton, United States, 2013.

[NAR 08] NARANJO J.E., GONZALEZ C., GARCIA R. et al., “Lane-change fuzzy control in autonomous vehicles for the overtaking maneuver”, IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 3, pp. 438–450, 2008.

[NAT 14] NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION (NHTSA), Human factors evaluation of level 2 and level 3 automated driving concepts: Past research, state of automation technology, and emerging system concepts, Report no. DOT HS 812 043, Washington, United States, 2014.

[NAU 16] NAUJOKS F., FORSTER Y., WIEDEMANN K. et al., “Speech improves human-automation cooperation in automated driving”, Workshop Automotive HMI – Mensch und Computer, Aachen, Germany, 2016.

[OLS 03] OLSEN E.C.B., Modeling slow lead vehicle lane changing, PhD thesis, Virginia Tech, Blacksburg, United States, 2003.

[OSO 14] OSOSKY S., SANDERS T., JENTSCH F. et al., “Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems”, Proceedings of International Society for Optics and Photonics, vol. 9084, 2014.

[PAR 00] PARASURAMAN R., SHERIDAN T.B., WICKENS C.D., “A model for types and levels of human interaction with automation”, IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, vol. 30, no. 3, pp. 286–297, 2000.

[POK 15a] POKAM R., CHAUVIN C., DEBERNARD S. et al., “Towards autonomous driving: An augmented reality interface design for lane change”, FAST-zero’15: 3rd International Symposium on Future Active Safety Technology Toward Zero Traffic Accidents, Gothenburg, Sweden, 2015.

[POK 15b] POKAM R., CHAUVIN C., DEBERNARD S. et al., “Augmented reality interface design for autonomous driving”, 12th International Conference on Informatics in Control, Automation and Robotics (ICINCO), pp. 22–33, Colmar, France, 2015.

[POK 16] POKAM R., “Vers des logiques et des concepts de représentations : Une séance de créativité”, Congrès ErgoIA : 15ème édition sur l’Ergonomie et l’Informatique Avancée, Biarritz, France, 2016.

[RAS 86] RASMUSSEN J., Information Processing and Human–Machine Interaction: An Approach to Cognitive Engineering, Elsevier, New York, United States, 1986.

[REV 18] REVELL K., LANGDON P., BRADLEY M. et al., “User Centered Ecological Interface Design (UCEID): A novel method applied to the problem of safe and user-friendly interaction between drivers and autonomous vehicles”, Intelligent Human Systems Integration, pp. 495–501, Springer, New York, United States, 2018.

[REY 00] REYMOND G., HEIDET A., CANRY M. et al., “Validation of Renault’s dynamic simulator for adaptive cruise control experiments”, Proceedings of the Driving Simulator Conference (DSC00), pp. 181–191, 2000.

[SAL 07] SALMON P.M., REGAN M., LENNÉ M.G. et al., “Work domain analysis and intelligent transport systems: Implications for vehicle design”, International Journal of Vehicle Design, vol. 45, no. 3, pp. 426–448, 2007.

[SAN 14] SANDERS T.L., WIXON T., SCHAFER K.E. et al., “The influence of modality and transparency on trust in human–robot interaction”, IEEE International Inter-disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2014.

[SCH 15] SCHIRO J., LOSLEVER P., GABRIELLI F. et al., “Inter and intra-individual differences in steering wheel hand positions during a simulated driving task”, Ergonomics, vol. 58, no. 3, pp. 394–410, 2015.

[SOL 16] SOLOMON B., “GM invests $500 million in Lyft for self-driving car race with Uber, Tesla and Google”, Forbes, available at: https://www.forbes.com/sites/briansolomon/2016/01/04/gm-invests-500-million-in-lyft-for-self-driving-car-race-with-uber-tesla-and-google/, 4 January 2016.

[SWE 02] SWEARINGEN K., SINHA R., “Interaction design for recommender systems”, Symposium on Designing Interactive Systems, pp. 312–334, 2002.

[TRI 14] TRIMBLE T.E., BISHOP R., MORGAN J.F. et al., Human factors evaluation of level 2 and level 3 automated driving concepts: Past research, state of automation technology, and emerging system concepts, Report no. DOT HS 812 043, National Highway Traffic Safety Administration, Washington, United States, 2014.

[VIC 02] VICENTE K.J., “Ecological interface design: Progress and challenges”, Human Factors, vol. 44, no. 1, pp. 62–78, 2002.

[VYG 12] VYGOTSKY L.S., Thought and Language, MIT Press, Cambridge, United States, 2012.

Chapter written by Raïssa POKAM MEGUIA, Serge DEBERNARD, Christine CHAUVIN and Sabine LANGLOIS.
