psychological discomfort for the operator. This cure may have some unwanted side effects, though. Showing abstract images would in fact dehumanize the enemy and, as a result, could desensitize military personnel operating unmanned systems. The danger of this development is that operators take decisions about life and death as if they are playing a video game (Sparrow, 2009), as evidenced by the words of an operator: "It's like a video game. It can get a little bloodthirsty. But it's fucking cool" (Singer, 2009b, pp. 308–309). The question this raises is whether operators can be held morally responsible for their decisions. Moral responsibility implies that they must have complete control over their behavior. That is, they fully know the consequences of their decisions and they take their decisions voluntarily (Fischer & Ravizza, 1998). This requires ethical reflection. Interfaces that only indirectly show abstract images of the enemy and military targets cause the operator to become less than fully aware of the consequences of his or her decisions. So, in order to prevent stress, the ethical reflection of the operator is reduced or even eliminated (Royakkers & van Est, 2010). In short, these types of interfaces tend to dehumanize the operator.
Coeckelbergh (2013) shows that surveillance technologies can also enable a kind of "empathic bridging" between the operator and potential targets and can thus lessen the danger of dehumanization. Operators usually watch potential targets intensively as they undertake a wide range of normal human activities, including eating, smoking, and interacting with friends and family, before eventually attempting to kill them. By zooming in on the potential targets
and watching what they are doing and what happens to them when
they are bombed, the operator gains “a certain intimacy” (Bumiller,
2012): “I see mothers with children, I see fathers with children, I see
fathers with mothers, I see kids playing soccer” (an operator quoted
in Bumiller, 2012). As we said earlier, this, in turn, can lead to very
stressful situations, since it is not so easy to kill a target who has
become more of a person to the operator or whose image the opera-
tor can recall (see also Fitzsimmons & Sangha, 2013). In a survey
of 900 drone crew members conducted by the U.S. Air Force in
2010 and 2011, 46% of drone pilots on active duty reported high
levels of stress and 29% reported emotional exhaustion or burnout.*
* https://forums.gunbroker.com/topic.asp?TOPIC_ID=554910
As Coeckelbergh (2013) concludes, more (empirical) research is
needed to better understand drone ghting practices and the psycho-
logical experience of droneoperators.
Another aspect is that, in the future, decisions will increasingly be mediated by the armed military robot. Military robots are becoming more and more autonomous through artificial intelligence (AI) technology and often include "ethical governors" in which international humanitarian law has partially been programmed. Through ethical governors, tele-led armed military robots may in the future correct the operator or may provide advice when the operator is deciding whether to use a weapon. Arkin's study (2009b) focuses on this. Ethical governors tell operators which type of ammunition should be used for the intended purpose and predict the resulting amount of damage. If the operator makes a decision that would result in too much collateral damage, the governor gives a warning and will, for example, block the bomber. The operator can override this, but he knows that there is a high probability that he is violating international humanitarian law and that he risks being tried for a war crime. The drawback of this technological mediation (see Verbeek, 2005) is that operators will no longer make moral choices but will simply exhibit influenced behavior, because subconsciously they will increasingly rely, and even over-rely, on the military robot (Cummings, 2006). As a consequence, a "moral buffer" may come into being between human operators and their actions, allowing human operators to tell themselves that it was the military robot that took the decision. This could blur the line between nonautonomous and autonomous systems, as the decision of a human operator is not the result of human deliberation but is mainly determined or even enforced by a military robot. This would mean a shift from "controlling" to "supervising," and effectively the military robot would take autonomous decisions. This shift is driven by the increase in the amount of information from different sources, which has to be integrated and then interpreted quickly in order to come to a decision. Military robots can do this more effectively and efficiently than people, for whom it is almost impossible. The operator will still be "on-the-loop" and will have the power to veto the system's firing actions.
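To make this warn-and-override interaction more concrete, the short Python sketch below illustrates one way such a check could be structured. It is a minimal, hypothetical illustration only, not Arkin's actual architecture: the names (StrikeRequest, ethical_governor_check, execute_or_override), the damage threshold, and the logged override are assumptions made for the example.

```python
# Hypothetical sketch of an "ethical governor" veto-with-override check.
# All names, thresholds, and data structures are illustrative assumptions;
# they do not reproduce Arkin's (2009b) actual architecture.

from dataclasses import dataclass
from typing import Optional


@dataclass
class StrikeRequest:
    """The operator's proposed weapon use (all fields are illustrative)."""
    weapon_type: str          # ammunition chosen by the operator
    estimated_damage: float   # predicted collateral damage, arbitrary units
    operator_id: str


@dataclass
class GovernorDecision:
    permitted: bool
    warning: Optional[str] = None


# Assumed limit beyond which the governor blocks the strike.
DAMAGE_THRESHOLD = 10.0


def ethical_governor_check(request: StrikeRequest) -> GovernorDecision:
    """Warn and block when predicted collateral damage exceeds the threshold."""
    if request.estimated_damage > DAMAGE_THRESHOLD:
        return GovernorDecision(
            permitted=False,
            warning=(
                f"Predicted collateral damage {request.estimated_damage:.1f} exceeds "
                f"the permitted level {DAMAGE_THRESHOLD:.1f}; strike blocked."
            ),
        )
    return GovernorDecision(permitted=True)


def execute_or_override(request: StrikeRequest, operator_overrides: bool) -> bool:
    """Return True if the strike goes ahead; log any operator override."""
    decision = ethical_governor_check(request)
    if decision.permitted:
        return True
    print(decision.warning)
    if operator_overrides:
        # The override is recorded so that responsibility remains traceable
        # to the human operator rather than to the system.
        print(f"Override logged for operator {request.operator_id}.")
        return True
    return False


if __name__ == "__main__":
    request = StrikeRequest(weapon_type="guided bomb", estimated_damage=14.2, operator_id="op-07")
    execute_or_override(request, operator_overrides=False)
```

The point of the sketch is only the shape of the interaction, a prediction, a warning, a block, and a logged human override; that override step is precisely where the "moral buffer" discussed above can arise.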
According to Human Rights Watch (2012), “on-the-loop” will soon
be rendered meaningless when the operator is given only a fraction of
a second to make the veto decision, as is the case with several systems already in operation. This would imply that we could no longer reasonably hold a human operator responsible for his decisions, since it would not really be the operator taking the decisions but a military robot. This could have consequences for the question of responsibility in another way too. Detert, Treviño, and Sweitzer (2008) have argued that people who believe that they have little personal control in certain situations, such as those who carry out monitoring, are more likely to go along with rules, decisions, and situations even if they are unethical or have harmful effects.
In addition, Vallor (2013) argues that the shift from "in-the-loop" to "on-the-loop" leads to a dangerous moral de-skilling of the military profession. She states that an "expert supervisor of another's decision, in order to be worthy of the authority to override it, must have acquired expertise in making decisions of the very same or a similar kind" (Vallor, 2013, p. 483). The question then is "[H]ow would such a supervisor ever become qualified to make that judgement, in a professional setting where the decision under review is no longer regularly exercised by humans in the first place?" (Vallor, 2013, p. 483).
That the shift "from in-the-loop to on-the-loop to out-of-the-loop" has become an actual threat follows from geolocation, where drone strikes are dependent on electronic signal intelligence rather than on human intelligence and the decision about life and death does not actually need a man-in-the-loop or on-the-loop (see Section 6.3.1).
6.4.3 Responsibility of the Commanding Officer
Even if there is not a human operator directly controlling the
robot, there is still a human agent that has decided whether or not
to deploy this robot: “even if a system is fully autonomous, it does
not mean that no humans are involved. Someone has to plan the
operation, dene the parameters, prescribe the rules of engage-
ment, and deploy the system” (Quintana, 2008, p. 15). So, in the
case of geolocation too, someone has to order that a drone is to be
sent out to target a mobile phone by geolocation. The basis of this argument is the doctrine of command responsibility, and although this ancient doctrine is interpreted differently by different authors (Garraway, 2009), it will usually cover the deployment of armed
military robots. As Schulzke (2013, p. 215) puts it:
Commanders should be held responsible for sending AWS [autono-
mous weapon systems] into combat with unjust or inadequately formu-
lated ROE [rules of engagement], for failing to ensure that the weapons
can be used safely, or for using AWS to fight in unjust conflicts, as all
of these conditions that enable or constrain an AWS are controlled by
the commanders.
For example, the possibility that an autonomous drone may engage
the wrong targets could be an acknowledged limitation of the system.
If the designers have made this clear to those who have purchased
or deployed the system, then, Sparrow (2007) argues, the designers can no longer be held responsible should this occur; in that case, the responsibility should be assumed by the commander who (wilfully and knowingly) decided to send the drone into the battlefield despite its known limitations.
To conclude this section with respect to the question “who can be
held responsible when an autonomous armed robot is involved in an
act of violence?,” we quote Krishnan (2009, p. 105): “the legal prob-
lems with regard to accountability might be smaller than some critics
of military robots believe. If the robot does not operate within the
boundaries of its specied parameters, it is the manufacturer’s fault.
If the robot is used in circumstances that make its use illegal, then it
is the commander’s fault.” at the future will include more autono-
mous systems seems almost a given, and although renouncing certain
types of autonomous robots might be a good idea for many reasons,
a lack of clarity as to who is responsible for their use is thus probably
not among them (see also Kershnar, 2013). Whether one would want
to have that responsibility is a different question altogether.
6.5 Proliferation and Security
e rst signs of an international arms race in relation to military
robotics technology are already visible. All over the world, sig-
nicant amounts of money are being invested in the development
of armed military robots. is is happening in countries such as
278 Just ordinAry robots
the United States, Britain, Canada, China, South Korea, Russia,
Israel, and Singapore. Proliferation to other countries, for exam-
ple, by the transfer of robotics technology, materials, and knowl-
edge, is almost inevitable. Many state and non-state actors that are
hostile to the United States have also begun to enter the area of
UAV technology. Iran has, for example, developed its own armed
drone, called the Ambassador of Death, which has a range of up to
1000 kilometers (or 600 miles).* That drones are within the reach
of many state and non-state actors is because, unlike other weapon
systems, the research and development of armed military robots is
fairly transparent and accessible. Furthermore, robotics technol-
ogy is relatively easy to copy and the necessary equipment to make
armed military robots can easily be bought and is not too expen-
sive (Horton, 2009; Singer, 2009b) (see Box 6.5).
In addition, much of the robotics technology is in fact open-source
technology and is a so-called dual-use technology; it is, thus, a tech-
nology that in future will potentially be geared toward applications in
both the military and the civilian market. One threat is that in future
certain commercial robotic devices, which can be bought on the open
market, could be transformed relatively easily into robot weapons.
Chances are that unstable countries and terrorist organiza-
tions will deploy armed military robots. Singer (2009a) fears that
armed military robots will become the ultimate weapon of strug-
gle for ethnic rebels, fundamentalists, and terrorists. Noel Sharkey
(2008c) also predicts that soon a robot will replace a suicide bomber.
According to Sharkey (2008c), “the spirit [is] already out of the
bottle.” International regulations on the use of armed military robots
will not solve this problem, as terrorists and insurgents disregard
international humanitarian law.
An important tool to curb the proliferation of armed military
robots is obviously controlling the production and purchase of
these robots by implementing global arms control treaties. A major
problem with this is that countries such as the United States and
China are not parties to these treaties. In addition, legislation is
needed in the eld of the export of armed military robots in the UN
* http://www.dailymail.co.uk/news/article-1305221/Ahmadinejad-unveils-Irans-long-range-ambassador-death-bomber-drone.html