recording the functioning of the car, there will also be a growing need
to record the workings of other relevant parts of the smart robotic
mobility system, such as the V2V and V2I systems.
In relation to the liability of the driver, measuring against the
“perfect” motorist is usually the norm. The fact that driver assistance
systems allow drivers to better respond to hazards may affect their
liability. For example, if a system has warned about a slippery road
surface ahead, the driver can no longer claim that the circumstances
were unforeseeable. In the European Union, for example, the trend
is that more and more driver assistance systems are becoming man-
datory. This leads to a thought-provoking tension. Guided by the
norm of the perfect driver, driver assistance systems will increase the
responsibilities of the driver. On the other hand, reliance on such
systems makes many drivers less alert. In addition, driver assis-
tance systems can lead to de-skilling, so driving ability may deterio-
rate. Parallel to the introduction of these systems, it would be wise,
therefore, to make driving with driver assistance systems a mandatory
part of the driving license.
7.5.3 Man-on-the-Loop
Questions about the reliance of man on technology, or even over-
reliance, become even more relevant when we move from “man-in-the-
loop” to “man-on-the-loop.” In this latter situation, decisions made
by the human being will be mediated more and more by the robotic
system. A crucial question is, therefore, in what way technological
mediation influences the actions of the operator. In making decisions,
drone operators are highly dependent on information that comes to them
from the armed military robot system. Although operators are legally
responsible, one could wonder to what extent they can reasonably be
held morally responsible for their actions.
In the case of geolocation, for example, the drone operator’s decision
relating to life-and-death situations seems to depend more on artificial
intelligence (triggered by an electronic signal) than on human
intelligence. The drawback of this kind of mediation is that operators
will no longer make moral choices, but will simply exhibit influenced
behavior, because subconsciously they will increasingly rely, and even
over-rely, on the military robot (Cummings, 2006). This could even