302 Just Ordinary Robots
longer need the skills of a traditional air pilot to fly the aerial system,
since the flying itself is done autonomously by the drone. The teleop-
erator can focus on other tasks, such as surveillance or targeting an
enemy. In a similar way, operating a small recreational drone requires
fewer skills than flying a traditional radio-controlled model airplane.
Just as digital cameras have made it possible for nonprofessionals to
take technically very good pictures with the help of robotics, people
do not have to be fanatic hobbyists to teleoperate a model airplane.
Thus, in the case of domestic drones operated through robotics, more
people can act as the man-in-the-loop. Furthermore, technological
systems are increasingly advising human operators on which action
must be taken. This we call "man-on-the-loop." In other words, there
is a shift from robots controlled remotely by operators to robot systems
that advise operators and to more and more autonomous systems.
Our study shows that the level of autonomy reached by a robotic
system strongly depends on the way the socio-technical environ-
ment it works in is structured. Current autonomous robots, such
as the robotic vacuum cleaner, can only work successfully if we
adapt our living space to its limited capacities. In industrial robot-
ics, the factory was redesigned in such a way that robots could be
used to take over simplied human tasks. Similarly, the successful
use of robots outside the factory does not depend on the engineer-
ing ability to build a robot with human-like physical and mental
capabilities, as promoted within the strong AI vision, as much as
on smartly (re)framing a certain social practice and developing a
socio-technical environment within which a “simple” robot can
work. For example, trying to let a machine simulate the way human
beings iron is currently a route to disaster. Instead, engineers
redefined ironing so that it was suitable for a machine to do. In addition,
our rapidly evolving technological systems, from communication to
sensor networks, are increasingly enveloping our daily environment
into an information and communication technology (ICT) friendly
infosphere (cf.Floridi, 2014, p. 144). It is the gradual, steady devel-
opment of these large technological systems (Hughes, 1983) that
power and limit the use of robots in the social domains that were
studied: global positioning systems (GPS) enable drones in the city
and on the battleeld, and the use of care robots will hinge on the
advancement of home automation systems.
303 Automation From Love to War
7.1.3 Exploring Artificial Social Intelligence
With regard to the ambition to build socially intelligent and mor-
ally competent robots, a similar argument can be made. In the short
or medium term, it is not expected that robots will display elaborate
social and moral behavior. Developing socially intelligent robots, that
is, “robots that show aspects of human style social intelligence, based
on deep models of human cognition and social competence” (Fong,
Nourbakhsh, & Dautenhahn, 2003, p. 145), is a very long-term goal.
There is also strong doubt whether it is at all possible to build ethical
decision making into machines in a human-like, nonreductionist way.
Despite this, many R&D eorts are put into making ICT more
“social.” is goal is part and parcel of the vision of ambient intelli-
gence (AmI), which has been driving the ICT research agenda of, for
example, the European Commission and many big ICT rms since
the start of this century. In this vision, humans are surrounded by
“smart environments,” with computers that are aware of which person
is present and what characteristics, emotions, needs, and intentions
they have. A smart environment can adjust to, react to, and anticipate
this. e AmI vision has a signicant impact on the development of
ICT in the elds of care, mobility, and the domestic environment.
For example, the AmI vision oers a new view on the way in which
we should deal with our health in the future: we should have per-
sonal health care that is automated as much as possible (Schuurman,
Moelaert El-Hadidy, Krom, & Walhout, 2009). Emphasis within
the AmI vision originally lay on information, communication, and
amusement. Under the inuence of robotics, the past few years have
seen increased attention on the automation of physical tasks. e ulti-
mate goal is a well-educated robot that can help humans with every-
day tasks. us, robotics has become a supportive element of the AmI
vision. As a consequence, the goals of developing socially situated
robots (robots that can perceive and react to a certain social environ-
ment) and socially embedded robots (robots that are able to interact
with agents and humans, and are at least partially aware of human
interactional structures, such as taking turns) have become integral
parts of the AmI vision.
Human–machine interaction is an important part of mod-
ern robotics research. Contemporary robots are still very limited
and predictable in their social interaction. Again, the step-by-step
improvement of the interaction between man and machine does
not depend on strong AI breakthroughs. e current route forward
tries to circumvent the currently limited capacities of ICT by mak-
ing use of the social intelligence of human beings. For example,
when the most developed chatbots, such as Cleverbot* and Eugene
Goostman,† generate their answers, they make use of earlier answers
to similar questions that have been put to people and that can be
found on the Internet. This is known as human-based computation,
since the computer makes use of the way in which groups of people
have solved problems before, or data-driven (weak) AI (cf. Nielsen,
2011). The social performance of these chatbots thus depends on a
large socio-technical system, namely, the Internet, or the social web.
More importantly, as Floridi (2014, p. 149) explains, “[T]he innu-
merable hours that we spend and keep spending rating and ranking
everything that comes our way are essential to help our smart yet
stupid ICTs to handle the world in apparently meaningful ways.”
Besides playing a role in shaping the way the robot acts, human
intelligence plays another important semantic role in the way we
interact with robots. Namely, people are capable of lling in much of
the social interaction with a robot themselves: we anthropomorphize
the robot. Moreover, people have no clearly described expectations
of such machines and consider them as toys to be played with and to
have fun with.
7.1.4 Exploring Artificial Moral Intelligence
At the moment, a machine that acts like a full ethical agent, which
can make explicit ethical judgments and is generally competent to
reasonably justify them, belongs to the realm of science fiction. The
current scientific debate concerns the feasibility and desirability
of implicit and explicit artificial ethical agents. Whereas implicit
ethical agents are designed so that they implicitly promote ethical
behavior, explicit ethical agents have human morality encoded in
their software.
* http://en.wikipedia.org/wiki/Cleverbot.
† http://en.wikipedia.org/wiki/Eugene_Goostman.
Designing information technology that accounts for human values
is widely accepted as a legitimate activity. However, to include ethical
aspects into the design of robotic systems, one must first acknowledge
that developing and applying robots is normative and that all robots
should, therefore, be considered ethical impact agents that should be
evaluated in terms of their ethical consequences. This book tries to
strengthen that awareness and gives an overview of the state of the art
in relation to the debate on the potential ethical implications of robot-
ics in a broad set of social practices. It is important to identify the
signicant moral values involved in the social practice that is being
redesigned and then to follow this with an operationalization of these
values in the robot design (cf. Van Wynsberghe, 2013).
The question of to what extent building ethical decision making
into machines is feasible and/or morally acceptable is a hot topic for
debate among a small group of ethicists. The difference between the
two approaches appears to resemble the distinction between weak and
strong AI. In the strong AI version, morality can be encoded in soft-
ware even in such a way that the robot behaves “more ethically” than,
for example, a human driver or soldier would if confronted with the
same situation. In the pragmatic AI version, the issue of whether a
machine can “behave ethically” is not considered an important one.
Instead, it is crucial that robots can function safely and can perform
their assigned missions eectively, including following instructions
that comply with the law.
The expectation that robots will be better drivers than humans is
widespread. Also, in the field of decision making concerning life-
and-death situations on the battlefield, some believe that future
robots will be more capable of this than humans. Autonomous action
also implies that such machines are expected to act “morally.” For
example, autonomous car robots must heed trac regulations and
military robots must act according to the Geneva Conventions.
Driven by these future socio-technical imaginaries, various attempts
to build ethical decision making into machines have been made. The
most well known is Ronald Arkin’s army-funded work on the prob-
lem of how to make drones capable of dealing with the complicated
ethics of wartime behavior. Arkin, Ulam, and Duncan (2009, p. 1)
proposed the concept of an “ethical governor,” which is supposed to
be “capable of restricting lethal action of an autonomous system in a
manner consistent with the Laws of War and Rules of Engagement.”
It is not yet clear, however, to what extent it is feasible to build such
explicit ethical agents. In particular, the eort to encode morality into
machine software seems to be in conict with the essential nonre-
ducibility of human ethical reasoning. In other words, there is strong
doubt whether computers will ever be able to deal in an appropriate
way with the ethical frame problem.
The core of the pragmatic approach toward building robotic systems
that take account of human values and laws is finding ways to
circumvent this ethical frame problem. This approach does not so
much depend on the engineering ability to build artificial morality,
but on smartly (re)framing a certain socio-technical practice in such
a way that a “simple” robot can act as an implicit ethical agent. For
example, in the chapter on military robots, how to enable machines
to comply with the principle of discrimination in the law of war was
discussed. One option to considerably reduce the possibility that
armed autonomous robots could attack civilians was to deploy them
only in places where civilians are not allowed. Canning (2006) sug-
gests building robots that only attack enemy fighters who carry hostile
weapon systems. However, reducing the frame problem in this way
can actually lead to ethical problems, for example, in circumstances
in which civilians usually carry a weapon and are therefore difficult to
distinguish from armed enemy insurgents. In fact, targeted killing
through tele-led armed drones by means of locating intended targets
by tracking their mobile phones is a topical example of reducing the
problem of distinguishing between an enemy and a civilian. Reducing
the frame problem in this way is also unethical, since it is not certain
whether the individual in possession of a tracked mobile phone is in
fact the intended target. To conclude, it is a real challenge to simplify
the ethical frame problem in an ethically appropriate fashion. The
danger of a technological imperative (doing things because they are
feasible and not because they are desirable) always lurks around the corner.
7.2 Expected Social Gains
Robotization presents a way of rationalizing social practices
and reducing their dependence on people (cf. Ritzer, 1983).
Rationalization can have many benets: greater eciency, less