16 Just Ordinary Robots
accomplished scientists, artists, composers, and personal companions”
(cited in Noble, 1997, p. 157). Simon and Newell (1958) predicted in
the same year that within 10 years a computer would beat the world
chess champion, discover and prove important new mathematical
theorems, and compose beautiful music.
1.2.4.3 Through Trial and Error However, none of this went very
quickly or smoothly. Over the years, the development of AI has wit-
nessed many ups and downs. In this process, changes in thinking about
AI go hand in hand with advances in brain and cognitive science (Böhle
et al., 2011, pp. 129–132). In the 1960s, for example, the AI community
assumed that any form of intelligence could be mimicked by computer
code. However, they slowly but surely ran into the counterintuitive
situation in which computers had relatively little difficulty solving
geometric problems that are very difficult for most people. In
contrast, computers appeared to experience great difficulty with matters
that are trivial for humans, such as recognizing faces. This setback
led to a sharp decline in interest, in particular on the part of the U.S.
government, in stimulating AI research. The entire 1970s were charac-
terized by this so-called AI winter.
In the early 1980s, the emergence of expert systems led to new, high
expectations. This type of system is based on the idea that experts make
decisions by following a set of clear rules. At the same time, expert sys-
tems depend on large databases full of things that are common
knowledge for people, such as words from various languages or the
names of famous people. In the mid-1980s, neural networks became
popular. Since the beginning of the 1990s, there have been systems
on the market that can recognize characters and voices using neural
networks. Such networks, however, need to be trained. Skills are learned
through reward and punishment. In this type of reinforcement learn-
ing, the robot is rewarded with points, and it is programmed so that
it strives for as many points as possible. Robot researchers often have
a hard time designing an appropriate system of rewards and punishments
for their robots. At Delft University of Technology (TU Delft), researchers tried to
teach a two-legged robot called Leo to walk (see Figure 1.6). Initially,
the researchers punished Leo when he fell over. This meant, however,
that Leo learned not to fall (Schuitema, Wisse, Ramakers, & Jonker,
2010), which he did by putting one leg on his neck. A reward for