accomplished scientists, artists, composers, and personal companions”
(cited in Noble, 1997, p. 157). Simon and Newell (1958) predicted in
the same year that within 10 years a computer would beat the world
chess champion, discover and prove important new mathematical
theorems, and compose beautiful music.
1.2.4.3 Through Trial and Error However, none of this went very
quickly or smoothly. Over the years, the development of AI has wit-
nessed many ups and downs. In this process, changes in thinking about
AI go hand in hand with advances in brain and cognitive science (Böhle
etal., 2011, pp. 129–132). In the 1960s, the AI community assumed
that any form of intelligence could be mimicked by a computer code,
for example. However, they slowly but surely ran into the counterin-
tuitive situation in which computers had relatively little diculty in
solving geometric problems that are very dicult for most people. In
contrast, the computer appeared to experience great diculty in mat-
ters that are trivial for humans, such as recognizing faces. is setback
led to a sharp decline in interest, in particular on the part of the U.S.
government, in stimulating AI research. All of the 1970s were charac-
terized by this so-called AI winter.
In the early 1980s, the emergence of expert systems led to new, high
expectations. is type of system is based on the idea that experts make
decisions based on a set of clear rules. At the same time, expert sys-
tems are dependent on large databases full of things that are common
knowledge for people, such as words from various languages or the
names of famous people. In the mid-1980s, neural networks became
popular. Since the beginning of the 1990s, there have been systems
on the market that can recognize characters and voices using neural
networks. Such networks, however, need to be trained. Skills are learned
through reward and punishment. In this type of reinforcement learn-
ing, the robot is rewarded with points, and it is programmed so that
it strives for as many points as possible. Robot researchers often have
a hard time designing an appropriate reward and punishment system for their robots. At Delft University of Technology (TU Delft), researchers tried to teach a two-legged robot called Leo to walk (see Figure 1.6). Initially, the researchers punished Leo when he fell over. This meant, however, that Leo learned not to fall (Schuitema, Wisse, Ramakers, & Jonker, 2010), which he did by putting one leg on his neck. A reward for
good walking behavior appeared to work better, but led to all kinds of
strange ways of walking. After a long time, the researchers discovered
that when one rewards the robot for efficient energy use, it starts to
learn the human way of walking.
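To make the role of reward design concrete, the following minimal sketch simulates the three reward schemes described above for a toy one-dimensional “walker.” The three gaits and all numeric values (distances, energy costs, and fall probabilities) are hypothetical illustrations rather than the actual TU Delft setup, and the learning rule is a simple bandit-style form of reinforcement learning written in Python.

import random

ACTIONS = ["stand_still", "small_step", "large_step"]

# Hypothetical per-gait properties: distance gained, energy used, and
# the probability of falling over (all values invented for illustration).
DISTANCE = {"stand_still": 0.0, "small_step": 1.0, "large_step": 2.0}
ENERGY = {"stand_still": 0.1, "small_step": 0.5, "large_step": 2.5}
P_FALL = {"stand_still": 0.0, "small_step": 0.05, "large_step": 0.4}

def reward(action, fell, scheme):
    # Three reward schemes, mirroring the narrative above.
    if scheme == "punish_falling":      # punished only for falling, the
        return -10.0 if fell else 0.0   # robot learns simply not to walk
    if scheme == "reward_distance":     # any ground-covering gait wins
        return 0.0 if fell else DISTANCE[action]
    if scheme == "reward_efficiency":   # distance minus energy spent
        return 0.0 if fell else DISTANCE[action] - ENERGY[action]
    raise ValueError(scheme)

def train(scheme, episodes=5000, eps=0.1, lr=0.1):
    # Bandit-style Q-learning: estimate each gait's value by
    # epsilon-greedy trial and error, collecting "points" as rewards.
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
        fell = random.random() < P_FALL[a]
        q[a] += lr * (reward(a, fell, scheme) - q[a])
    return max(q, key=q.get)

for scheme in ("punish_falling", "reward_distance", "reward_efficiency"):
    print(scheme, "->", train(scheme))

Run repeatedly, the fall penalty settles on standing still, the distance reward favors the risky large step, and only the efficiency reward selects the moderate gait, which echoes the stages of Leo’s learning history described above.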
Another important AI product is the “intelligent agent.” This is “a computer system that is capable of flexible autonomous action in dynamic, unpredictable, typically multi-agent domains” (Luck, McBurney, Shehory, & Willmott, 2005, p. 11). These are computer programs that can “observe” their environment and can autonomously compute actions that affect their environment; a minimal sketch of this sense–decide–act loop follows at the end of this paragraph. This approach is seen as an
important new paradigm for software development. In the late 1980s, a
new AI approach emerged, the situated or “embodied” AI. It assumed
that intelligence is built from the ground up by trial and error and that,
in addition, the computer really needs a “body” to actually get to know
the world. is AI approach thus provides an additional motivation to
build a robot.
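The quoted definition can be illustrated with the canonical sense–decide–act loop. The thermostat domain below, including its class names and thresholds, is a hypothetical toy chosen for illustration; it is not drawn from Luck et al. (2005) or from any particular agent framework.

import random

class Environment:
    # A trivially dynamic world: a room whose temperature drifts.
    def __init__(self):
        self.temperature = 15.0

    def observe(self):
        return self.temperature

    def apply(self, heating_on):
        drift = random.uniform(-0.5, 0.5)  # unpredictable dynamics
        self.temperature += (1.0 if heating_on else -0.3) + drift

class ThermostatAgent:
    # Keeps the room near a set-point without any outside steering.
    def __init__(self, set_point=20.0):
        self.set_point = set_point

    def decide(self, perceived_temperature):
        return perceived_temperature < self.set_point

env, agent = Environment(), ThermostatAgent()
for step in range(10):
    percept = env.observe()         # "observe" the environment
    action = agent.decide(percept)  # autonomously compute an action
    env.apply(action)               # thereby affect the environment
    print(step, round(percept, 1), "heating on" if action else "heating off")

Even this trivial agent exhibits the three ingredients of the definition: it observes a dynamic, partly unpredictable environment, autonomously computes an action, and thereby affects that environment.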
1.2.4.4 Brute Computational Power What has become of the expectations of the very first AI experts? The second Industrial Revolution
that Wiener predicted has come and still rages on with the rise of
the industrial robot.

Figure 1.6 Leo, the two-legged robot developed by TU Delft. (Photo courtesy of Delft University of Technology, the Netherlands.)

In the meantime, a computer has also beaten the world chess champion. That did not happen within the 10-year time
span predicted by Simon and Newell (1958), but took nearly 40 years.
In particular, a lack of computing power held AI back for quite a while. The decades-long exponential increase in the speed and
processing power of computers, the so-called Moore’s Law, now makes
much possible on the basis of brute computational power. In March
1997, the Deep Blue II chess computer defeated the then world chess
champion Garry Kasparov. The match of six games ended with a score of 3.5–2.5. The chess grandmaster complained afterward that humans had been involved in the computer's play. According to Kasparov, one particular move was clearly too stupid for a computer, while another had
been too creative. Other chess computers, however, proved able to
produce similar moves. One year later, Professor David Cope presented a program that analyzes the musical style of old masters such as Bach
and Stravinsky.* On this basis, the program, EMI (Experiments in
Musical Intelligence), creates synthetic classical music. A symphony
in the style of Mozart, entitled Austrian Composer’s 41st, has already
been performed. Only real connoisseurs could distinguish the EMI
symphony from a real Mozart piece. What is perhaps an even stronger example of AI or creativity was created in 2009. The universities of Aberystwyth and Cambridge designed an artificial scientist. The scientific robot Adam was the first robot to independently discover a number of new scientific findings. This robot discovered a gene in
yeast that had been hunted for by researchers for decades (Ravilious,
2009). In the same year, researchers from Cornell University devel-
oped a computer program that deduced Newton’s laws of motion
from the motion of a pendulum (Keim, 2009). It is hoped that in the
future, computers will be capable of discovering laws of nature that
are as yet unknown.

* http://www.computerhistory.org/atchm/algorithmic-music-david-cope-and-emi/.
1.2.4.5 Artificial Social Intelligence Earlier, we indicated that robot
experts want to develop robots with a human-like body, because our
physical environment is adapted to our human dimensions. With
respect to the robot's behavior, a similar argument has been in use since the mid-1990s. It is suggested that if robots are to operate in human environments, it is important that these machines are
so programmed that they are able to interact socially with people
and that they can also act in a moral way. The study of interactions
between humans and robots is called human–robot interaction (HRI).
In this multidisciplinary field of research, we see the emergence of concepts such as social robots, artificial (robotic) companions, and artificial moral agents.
According to sociologist Sherry Turkle (2011), many of us have
been philosophically and emotionally prepared by now to “seriously
consider machines as potential friends, confidants and even romantic partners” (p. 9). She claims that we stand on the verge of considering artificial human companions as a normal part of our lives, and coined this the “robotic moment” in human history. The tech-
nologies that are built to interact with humans via humans’ social
rules are referred to as social robots or artificial (robotic) companions. These machines are embodied artificial agents, either as virtual robots (avatars) or as physical robots. Floridi (2014) sees artificial
companions evolving in various directions. He believes that, like pets, artificial companions will address social needs and the human desire for emotional bonds and playful interactions. Artificial companions will also provide information-based services in various contexts, such as communication, entertainment, education, training, health, and safety (pp. 154–156). Researchers also aim to develop
articial agents that are able to deal with human emotions. Floridi
expects that articial companions will act as “memory stewards,
creating and managing a repository of information about their own-
ers, even to the point that they—based on long-term life-logging—
will be able to simulate a person.
Social robotics is still in an early phase of development. Fong,
Nourbakhsh, and Dautenhahn (2003) define three classes of social
robots or goals for social robot research and development: socially
situated, socially embedded, and socially intelligent (p. 145). Socially
situated robots can perceive and react to a certain social environment.
For example, they are able to distinguish between other social agents
and various objects in that environment. Socially embedded robots
are structurally linked to a certain social environment. They are able
to interact with agents and humans and are at least partially aware
of human interactional structures (e.g., taking turns). These first two
goals also play a central role in the so-called ambient intelligence
(AmI) vision.* AmI literally means that people are surrounded by
intelligent equipment. A smart environment not only knows that peo-
ple are present but also who is present and what characteristics, needs,
emotions, and intentions they have. The founding fathers of AmI are
Aarts and Marzano (2003) from Philips. At the beginning of this
century, this vision was a way to shape the R&D agenda of Philips
and the European Commission. Today, many world-leading IT firms,
such as Microsoft, have embraced it and proclaimed that the “era of
ambient intelligence” has begun (Sandoval, 2014).
The ultimate goal of social robotics is to develop socially intelligent robots, that is, “robots that show aspects of human style social intelligence, based on deep models of human cognition and social competence” (Fong et al., 2003, p. 145). Human social characteristics
that engineers try to incorporate in machines are recognizing faces
and emotions, expressing emotions, communicating using high-level
dialogue, establishing/maintaining social relationships, using natu-
ral cues such as gaze and gestures, exhibiting distinctive personality
and character, and having the ability to learn social competencies.
Although developing these features will be very challenging, Fong
etal. believe that modern technology will make it increasingly pos-
sible to interact with robots in a rened manner.
For some time now, we have seen the emergence of the social virtual robot (also called the softbot) or the chatbot. The chatbot is a so-called
intelligent agent. On the IKEA website, you can put questions to
Anna, the virtual assistant. Among the most developed chatbots are
Cleverbot and Eugene Goostman. It is even claimed that Cleverbot passed the Turing test during a technology festival in India in 2011
(Aron, 2011). Based on 5-minute online chats, 10 out of 30 judges at
the Royal Society in London concluded that the program “Eugene”
was a 13-year-old Ukrainian boy (Sample & Hern, 2014). When generating their answers, both computer programs draw on earlier answers that people have given to similar questions, as found on the Internet (a minimal sketch of this retrieval idea closes this section). To get an impression of the state of this technology, it is
instructive to watch a video on YouTube, in which a conversation
between two Cleverbots can be seen (Labutov, Yosinski, & Lipson,
2011). The conversation is realistic, but at the same time it is also
* The resemblance between the abbreviations AmI and AI is not a coincidence.
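As promised above, the following deliberately crude sketch illustrates the retrieval idea behind such chatbots: reuse an earlier human answer to the most similar stored question. The tiny corpus of question-and-answer pairs and the word-overlap similarity measure are invented for illustration; systems such as Cleverbot match against millions of logged conversations with far more sophisticated, proprietary techniques.

import string

def normalize(text):
    # Lowercase and strip punctuation so "name?" matches "name".
    return text.lower().translate(str.maketrans("", "", string.punctuation)).split()

def similarity(a, b):
    # Crude lexical overlap between two sentences (Jaccard index).
    wa, wb = set(normalize(a)), set(normalize(b))
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

CORPUS = [  # hypothetical logged exchanges
    ("how are you today", "I am fine, thank you."),
    ("what is your name", "My name is Eugene."),
    ("do you like music", "Yes, I listen to music all the time."),
]

def reply(question):
    # Answer with the stored reply whose question looks most similar.
    best_question, best_answer = max(CORPUS, key=lambda qa: similarity(question, qa[0]))
    return best_answer

print(reply("What is your name?"))  # -> My name is Eugene.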