317 Automation: From Love to War
topic of conversation between patients and their human caregivers) (cf. Borenstein & Pearson, 2010; Lau et al., 2009, p. 29). In the case of elderly people with dementia, it would often be hard to ask for their informed consent about the use of social robots like Paro. But if these effects could indeed be realized, the fact that such patients are to a certain extent "tricked" by an artificial social companion might well be regarded by their close family members as a permissible kind of white lie, at least when the use of Paro is not at the expense of the care of their relatives that is carried out by humans.
Persuasive robot technology, however, also has the potential to deceive and manipulate people, in particular vulnerable groups such as children and mentally less able people. For example, interaction technology is not limited to replicating or mimicking human behavior. Scientists in the field of human–technology interaction (HTI) are experimenting with "human responses to media technologies that transform or augment our capabilities beyond what was previously possible, creating and studying entirely new experiences and social phenomena" (IJsselsteijn, 2013, p. 20). Experiments with an avatar, for example, demonstrated that hearing somebody's heartbeat increased perceived intimacy even more than eye contact did. This seems to indicate the strong, maybe even addictive, power of artificial heartbeats, for example in sex robots. According to HTI professor IJsselsteijn (2013), such research illustrates "the potent transformational effects that media may have on our social interactions, deeply affecting people's interpersonal connectedness and behaviour" (p. 21).
7.5 Degree of Autonomy of Robots
The degree of autonomy of robots concerns the degree of control that is delegated by the user of the robotic system to that system. This section reflects on what this delegation of control from humans to robotic machines means for the distribution of responsibilities and liabilities. In particular, we focus on the automotive industry and the military, where a shift from "man-in-the-loop" to "man-on-the-loop" to "man-out-of-the-loop" (autonomous robots) is clearly visible. As a result, the introduction of robotics in these two socio-technical practices brings along a complex set of questions in the field of responsibility and liability.
7.5.1 Systems View on Responsibility and Liability
Addressing these issues requires a systems perspective. Let us take the example of cars. With cars, an integral legal–social and technical system has come into place for dealing with issues of liability. This concerns compensation rights, on the basis of which it can be determined who is liable in what way, ranging from fault-based liability and strict liability to product liability. Furthermore, there are legal arrangements that ensure that those who are found liable can actually carry the financial burden. Think of liability insurance, which protects insured parties against the risk of liability. Furthermore, the owner of a car is legally required to keep it in good repair (think of the obligatory periodic motor vehicle inspection test) and to have it insured. Car manufacturers, in turn, must meet all sorts of ISO standards at the product level. Robotization can put pressure on this system of distributing responsibilities and liabilities in various ways. In the following, we indicate several issues that will come up for discussion if more and more control is delegated from humans to robot cars and/or armed military robots. We will look at the changing roles and responsibilities of manufacturers of cars and military equipment, car drivers, road authorities, drone operators, and commanding officers, depending on whether man is in, on, or out of the loop.
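The three levels of delegated control that structure the rest of this section can be pictured as a simple taxonomy. The following sketch is purely illustrative: the role names and the mapping of loop positions to responsible parties are assumptions drawn loosely from the discussion here, not a legal scheme.

```python
from enum import Enum

class LoopPosition(Enum):
    """Degree to which a human retains control over the robotic system."""
    IN_THE_LOOP = "human executes and decides"        # e.g., driver with assistance systems
    ON_THE_LOOP = "human supervises, system decides"  # e.g., drone operator monitoring
    OUT_OF_THE_LOOP = "system decides autonomously"   # e.g., driverless car

# Hypothetical mapping of loop position to the parties that carry the main
# burden of responsibility, following the examples in this section.
RESPONSIBILITY = {
    LoopPosition.IN_THE_LOOP: ["driver or operator", "manufacturer"],
    LoopPosition.ON_THE_LOOP: ["operator", "manufacturer", "commanding officer"],
    LoopPosition.OUT_OF_THE_LOOP: ["manufacturer", "road authority", "commander"],
}

def responsible_parties(position: LoopPosition) -> list[str]:
    """Return the (illustrative) main bearers of responsibility."""
    return RESPONSIBILITY[position]
```

The point of the sketch is only that, as control shifts rightward through the enum, the human user drops out of the responsibility list and system-level actors take their place.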
7.5.2 Man-in-the-Loop
Car drivers and plane pilots have always been in the loop, that is, part of a technological system. Robotization, however, is changing these socio-technological systems and challenging the existing way of organizing responsibilities and liabilities. For example, the introduction of task-supporting systems calls for new types of ISO standards. Section 7.3 has already mentioned the current lack of ISO standards for such robotic systems. Thus, although car manufacturers are liable when damage is caused by a defect in their products, there is uncertainty about what exactly they need to do to guarantee the safety of their products. The use of military robots faces similar problems. Although manufacturers have the moral obligation to make safe and reliable military products that satisfy various legal and ethical conditions, for example as set out in international humanitarian law, they are rarely held responsible for accidents caused by poor design (Thompson, 2007). Specific international legal safety guidelines, such as ISO norms, could prevent neglect of the safety aspects of armed military robots. Such a regulatory framework, however, is currently lacking.
The complexity of robotic systems increases the need to record their functioning digitally, in order to keep track of whether failures are man-made or caused by technical deficiencies. Cooperative driving systems provide an example of a complex robotic system: in addition to the cars themselves, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications are subsystems within that smart system. Such a complex system makes establishing producer liability difficult. Road authorities are liable if the road does not meet the requirements that may be expected of it in the given circumstances. When roads become smarter, there may be discussion about the scope of the term "road equipment" and thus about the applicability of the road authority's liability. The most obvious step would be to also define intelligent wayside systems and their underlying computer systems as road equipment. But how can it be determined that the cause of damages can be traced to the failure of a product rather than to an external cause?
This same problem was faced with the robotization of airplanes. To determine who is liable for a particular accident, black boxes (flight data recorders) were installed in airplanes, in which all flight data of the last part of the flight are stored electronically. This could also be done with other robotic systems. In the United States, the National Highway Traffic Safety Administration (NHTSA) demanded that producers incorporate a so-called event data recorder in all new cars as of September 2012.* Such a black box stores, among other things, information about the speed of the car before the accident, the use of the brakes, and the use of seatbelts. Just as with the European eCall system, the introduction of the event data recorder in the United States raises many questions in the field of privacy. While the NHTSA asserts that this system has been introduced to increase traffic safety, others claim that it has been done primarily to resolve liability issues, especially those relating to the car producers themselves.†
Besides recording the functioning of the car, there will also be a growing need to record the workings of other relevant parts of the smart robotic mobility system, such as the V2V and V2I systems.

* http://www.nhtsa.gov/EDR, consulted on October 29, 2014.
† See, for example, http://www.youtube.com/watch?v=KzYLJHgUf0k, consulted on October 29, 2014.
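The event data recorder discussed above can be pictured as a small ring buffer that continuously overwrites old measurements and freezes its contents when a crash trigger fires, preserving only the pre-crash window for investigators. This is a minimal sketch: the field names, sampling window, and trigger logic are illustrative assumptions, not the NHTSA specification.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Sample:
    """One pre-crash measurement (fields are illustrative, not the NHTSA spec)."""
    t: float                 # seconds since ignition
    speed_kmh: float
    brake_applied: bool
    seatbelt_fastened: bool

class EventDataRecorder:
    """Keeps only the most recent samples; freezes them when a crash is detected."""

    def __init__(self, capacity: int = 50):   # e.g., 5 s of data at 10 Hz
        self._buffer = deque(maxlen=capacity)  # old samples fall off automatically
        self.frozen = None                     # snapshot preserved after a crash

    def record(self, sample: Sample) -> None:
        if self.frozen is None:                # stop overwriting once a crash is logged
            self._buffer.append(sample)

    def on_crash(self) -> None:
        self.frozen = list(self._buffer)       # preserve the pre-crash window

# Usage: record five samples into a three-sample buffer, then trigger a crash.
edr = EventDataRecorder(capacity=3)
for t in range(5):
    edr.record(Sample(t=float(t), speed_kmh=100.0 - 10 * t,
                      brake_applied=t >= 3, seatbelt_fastened=True))
edr.on_crash()
# Only the last three samples (t = 2, 3, 4) survive for the liability investigation.
```

The design choice mirrors the liability debate in the text: the buffer deliberately retains nothing but the final moments before the trigger, which is exactly the evidence (speed, braking, seatbelt use) that determines fault.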
In relation to the liability of the driver, measuring against the "perfect" motorist is usually the norm. The fact that driver assistance systems allow drivers to respond better to hazards may affect their liability. For example, if a system has warned about a slippery road surface ahead, the driver can no longer claim that the circumstances were unforeseeable. The trend, for example in the European Union, is that more and more driver assistance systems are becoming mandatory. This leads to a thought-provoking tension. Guided by the norm of the perfect driver, driver assistance systems will increase the responsibilities of the driver. On the other hand, reliance on such systems makes many drivers less alert. In addition, driver assistance systems can lead to de-skilling, so driving ability may deteriorate. Parallel to the introduction of these systems, it would therefore be wise to make driving with driver assistance systems a mandatory part of obtaining a driving license.
7.5.3 Man-on-the-Loop
Questions about the reliance of man on technology, or even over-reliance, become even more relevant when we move from "man-in-the-loop" to "man-on-the-loop." In the latter situation, decisions made by the human being are mediated more and more by the robotic system. A crucial question, therefore, is in what way technological mediation influences the actions of the operator. In making decisions, drone operators are highly dependent on information that comes to them from the armed military robot system. Although operators are legally responsible, one could wonder to what extent they can reasonably be held morally responsible for their actions.
In the case of geolocation, for example, the drone operator's decision relating to life-and-death situations seems to depend more on artificial intelligence (triggered by an electronic signal) than on human intelligence. The drawback of this kind of mediation is that operators will no longer make moral choices, but will simply exhibit behavior influenced by the system, because subconsciously they will increasingly rely, and even over-rely, on the military robot (Cummings, 2006). This could even blur the line between nonautonomous and autonomous systems, as the decision of the human operator is not the result of human deliberation, but is mainly determined or even enforced by a military robot.
7.5.4 Man-out-of-the-Loop
Although operating an autonomous car on public roads is far more complex than flying an airplane, automotive industry consultant IHS Automotive (2014) predicts that the share of autonomous cars in worldwide car sales will grow from about 1% in 2025 to 9% in 2035. According to these predictions, more than 50 million driverless cars will be driving around the world in 2035.
Before such a scenario can become reality, a lot of testing needs to be done, not only on closed test circuits but also on the public road. Various governments are drawing up laws to enable experiments with self-driving cars that would otherwise be illegal and uninsurable. Pleas for the establishment of free experimental zones can be found with respect to various types of robots. In Japan, Tsukuba City has been designated an experiment zone for the development of mobile and service robots. This is linked to the establishment of a Robot Safety Center for the development of safety standards and a certification system for service robots (Flatley, 2010).
Unlike car drivers, people who are transported by these autonomous robot cars are released from their responsibility to avoid accidents. That responsibility has shifted completely to the developers and operators of the robotic system: the car manufacturer and the road authorities. Manufacturers of autonomous cars thereby become a kind of common carrier, like railways and taxicab companies, which are held to owe passengers the highest standard of care and are liable for even the slightest negligence.
When, in the future, the army employs fully autonomous armed robots, there will still be people involved who "plan the operation, define the parameters, prescribe the rules of engagement, and deploy the system" (Quintana, 2008). If a robot does not operate within the boundaries of its specified parameters, it is the manufacturer's fault. Moreover, the ancient Doctrine of Command Responsibility implies that the commander is responsible for both the legal and the illegal use of the autonomous robot.