Armed Military Drones
These robots are programmed to perform, or not to perform, certain
tasks, with an operator controlling their activities to varying degrees.
Nonautonomous robots require humans to authorize any decision to
use lethal force; that is, they require a "man-in-the-loop."
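To make the distinction concrete, the following sketch shows what a "man-in-the-loop" gate amounts to in software. All names here (Target, request_operator_decision, engage) are invented for illustration and do not reflect any real weapons system: the point is only that the control loop can track and report autonomously, yet cannot release a weapon without an explicit human decision.

```python
# Hypothetical "man-in-the-loop" gate; every identifier is invented
# for this sketch and stands in for real sensing and actuation.
from dataclasses import dataclass

@dataclass
class Target:
    identifier: str
    classification: str  # e.g., "vehicle", "structure", "unknown"

def request_operator_decision(target: Target) -> bool:
    """Block until a human operator explicitly authorizes or denies."""
    answer = input(f"Authorize engagement of {target.identifier} "
                   f"({target.classification})? [y/N] ")
    return answer.strip().lower() == "y"

def engage(target: Target) -> None:
    print(f"Engaging {target.identifier}")  # placeholder for actuation

def control_loop(detected: list[Target]) -> None:
    for target in detected:
        # The robot may detect and track autonomously, but the decision
        # to use lethal force is reserved for the human operator.
        if request_operator_decision(target):
            engage(target)
        else:
            print(f"Holding fire on {target.identifier}")
```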
When an armed military robot performs tasks and makes decisions about
whether to destroy military targets completely independently, thus
without human intervention, we speak of autonomous systems. These
autonomous systems have explicit task programming and act according
to a certain fixed algorithm. This means that the acts of the
autonomous military robot are predictable and can be traced afterward.
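A minimal sketch, again with hypothetical names, illustrates what "fixed algorithm" and "traced afterward" mean in practice: the decision rule is frozen at production time, so identical inputs always yield identical decisions, and each decision can be written to an audit trail for later reconstruction.

```python
# Hypothetical fixed-rule autonomous decision: the rule set is frozen
# at production time, so behavior is deterministic and auditable.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("decision-audit")

# Rules fixed during production; the robot cannot modify them in the field.
ENGAGEABLE_CLASSES = frozenset({"tank", "artillery"})

def decide(classification: str, confidence: float) -> bool:
    decision = classification in ENGAGEABLE_CLASSES and confidence >= 0.95
    # Every decision is logged, so the robot's acts can be traced
    # and explained after the fact.
    log.info("input=(%s, %.2f) -> engage=%s",
             classification, confidence, decision)
    return decision
```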
Learning military robots, based on neural networks, genetic algorithms,
and agent architectures, are able to decide on a course of action and
to act without human intervention. The rules by which they act are not
fixed during the production process but can be changed by the robot
itself during its operation (Matthias, 2004). The problem with these
robots is that there will be a class of actions for which no one is
capable of predicting the robot's future behavior. These robots would
thus become a "black box" for difficult moral decisions, preventing
any second-guessing of their decisions. Control is then transferred to
the robot itself, but it is nonsensical to hold the robot responsible
at that moment, since robots built in the next two decades will not
possess anything like intentionality or a real capability for agency.
The deployment of armed learning military robots would therefore
create a responsibility gap (Matthias, 2004): it would be unjust to
hold people responsible for the actions of robots over which they
could not have any control.* Although learning armed military robots
appear high on the U.S. military agenda (Sharkey, 2008a), the
deployment of these robots is, at least under present and near-term
research developments, not likely to happen within the next two
decades (Arkin, 2009a).†
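By contrast, a learning robot's decision rule is itself rewritten in the field. The toy sketch below is purely illustrative (a single threshold updated online stands in for a neural network or genetic algorithm): it shows why the production-time specification no longer determines behavior, since what the system does after deployment depends on experiences no one inspected.

```python
# Toy illustration of Matthias's point: a decision threshold that the
# system adjusts from its own operational experience. After enough
# updates, the rule in force is no longer the rule that was shipped.
class LearningDecider:
    def __init__(self, threshold: float = 0.95, learning_rate: float = 0.05):
        self.threshold = threshold      # shipped value, known at production
        self.learning_rate = learning_rate

    def decide(self, confidence: float) -> bool:
        return confidence >= self.threshold

    def feedback(self, was_correct: bool) -> None:
        # The robot changes its own rule during operation: the threshold
        # drifts with experience, so future decisions cannot be predicted
        # from the production-time specification alone.
        if was_correct:
            self.threshold -= self.learning_rate  # grows more permissive
        else:
            self.threshold += self.learning_rate  # grows more cautious
```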
We will not discuss this type of military robot in this chapter,
because such robots will not be developed within the coming decades
and statements about them would be very speculative.
* For a discussion of possible mechanisms and principles for the assignment of moral responsibility to autonomous learning (intelligent) robots, we refer to Hellström (2013) and Sparrow (2007).
† Barring some significant breakthrough in artificial intelligence research, situational awareness cannot be incorporated in software for lethal military robots (Fitzsimonds & Mahnken, 2007; Gulam, 2006; Kenyon, 2006; Sharkey, 2008a; Sparrow, 2007).