very strange. It seems as if we are dealing with a kind of uncanny valley, in the sense that the language seems real, but not the social interaction. The conversation between these two computers is, however, not at all creepy, but rather humorous.
Social interaction between people concerns not just verbal information, but, more importantly, nonverbal communication as well; think of posture, or the emotions that can be read from facial expressions. "Affective computing" deals with this area of the human–machine interaction process. According to one of the founders of this field, Rosalind Picard (1995) of MIT, we are dealing with "computing that relates to, arises from, or influences emotions" (p. 1). The goal is for computers to learn to recognize human emotions and to adapt their behavior on that basis. To this end, affective computing analyzes aspects such as intonation of the voice, the gestures people make, bodily posture, and facial expression. For example, the Dutch company Noldus has developed FaceReader, which is used regularly by marketing researchers. This technology uses the Facial Action Coding System (FACS), developed by the renowned psychologist Paul Ekman, who as far back as the 1970s suggested that there are six basic human emotions—anger, disgust, fear, happiness, sadness, and surprise—all of which can be read from the face within a millisecond. This coding can also be used so that avatars, softbots, or real robots can show emotions. It is expected that the user friendliness, and thus the acceptance, of such technologies will increase as a result (Picard & Klein, 2002).
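To give a rough idea of how such a coding scheme can be used computationally, the following is a minimal sketch in Python that maps simplified FACS action-unit (AU) combinations to Ekman's six basic emotions. The AU patterns are simplified illustrations drawn from common FACS summaries, and the helper function is hypothetical; this is not the actual FaceReader implementation.

# Illustrative sketch: mapping simplified FACS action units (AUs) to
# Ekman's six basic emotions. The AU patterns below are simplified,
# commonly cited combinations, not the full FACS specification, and
# classify_emotion is a hypothetical helper, not a FaceReader API.

EMOTION_AUS = {
    "happiness": {6, 12},           # cheek raiser + lip corner puller
    "sadness": {1, 4, 15},          # inner brow raiser, brow lowerer, lip corner depressor
    "surprise": {1, 2, 5, 26},      # brow raisers, upper lid raiser, jaw drop
    "fear": {1, 2, 4, 5, 20, 26},   # raised and drawn brows, stretched lips, jaw drop
    "anger": {4, 5, 7, 23},         # lowered brows, raised lids, tightened lips
    "disgust": {9, 15},             # nose wrinkler, lip corner depressor
}

def classify_emotion(active_aus):
    """Return the basic emotion whose AU pattern best overlaps the observed AUs."""
    def score(emotion):
        pattern = EMOTION_AUS[emotion]
        return len(pattern & active_aus) / len(pattern)
    best = max(EMOTION_AUS, key=score)
    return best if score(best) > 0 else "neutral"

# A face showing AU6 + AU12 (a "Duchenne smile") is read as happiness.
print(classify_emotion({6, 12}))     # -> happiness
print(classify_emotion({1, 4, 15}))  # -> sadness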
1.2.4.6 Artificial Morality The question of whether not only social behavior but also moral behavior can be programmed into computers is currently being discussed. This is quite clearly a very recent scientific field. At the beginning of this introductory chapter, there was a reference to the three ethical laws that Asimov's robots are supposed to comply with. Especially in the field of military robots, there is reflection on the use of robots, which should behave according to international humanitarian law, as defined in the Geneva Conventions. Ronald Arkin (2009) assumes that it is possible to develop robots that can make better decisions under combat conditions than human soldiers. He proposes not only that AI is independent of emotions, as it is only based on logic, but at