are called humanoids. The technical argument for designing robots
to look like people is that such robots may well operate in human
environments that are optimized for human use. One assumes that
appearance is important for interaction between humans and robots.
To investigate this, Japanese robot scientist Ishiguro builds human-
oid robots as lifelike as possible (Minato, Shimada, Itakura, Lee, &
Ishiguro, 2006) (see Figure 1.3). Humanoid robots that are built to
aesthetically resemble humans are called androids.
The “uncanny valley” theory of Japanese robotic scientist Masahiro
Mori (1970) has played an important role since the 1970s in thinking
about the interaction between robots and humans (see Figure 1.4).
Mechanoids elicit little emotional reaction in people. But the more a
robot looks like a person or an animal, the more positive and empa-
thetic feelings it will evoke in people. If robots resemble people very
strongly, but their behavior is not human enough, then Mori predicts
a strong sense of unease. In this case, the appearance is human-
like, but there is very little familiarity. This is what Mori calls the
uncanny valley. Mori recommends avoiding this valley by building
Figure 1.3 The android robot that robotic scientist Ishiguro built in his own image. (Photo courtesy of Rinie van Est.)
robots that do not resemble people or animals too much, but are still
human-like or animal-like in behavior. Paro is a good example of this.
Paro is a well-known pet robot that looks like a baby seal. Initially,
the Japanese engineers wanted to develop a cat robot. Test subjects
reacted negatively to it, probably because a “cat” elicits a clear expec-
tation that the robot cat did not deliver. With the seal robot, one did
not come up against the uncanny valley effect. A second way to avoid
the uncanny valley is to build robots that are so similar to humans (or
animals) in appearance and behavior that they are indistinguishable.
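To make the shape of Mori's curve concrete, it can be sketched as a simple piecewise function: affinity rises with human likeness, collapses into the valley just short of full likeness, and recovers for a (near-)perfect imitation. The following sketch is purely illustrative; the breakpoints and numbers are invented for this example and do not come from Mori's paper.

    # Illustrative sketch of Mori's uncanny valley: "familiarity" as a toy
    # function of human likeness in [0, 1]. All numbers are invented for
    # illustration; Mori drew the curve only qualitatively.

    def affinity(likeness: float) -> float:
        """Return a made-up familiarity score for a given human likeness."""
        if likeness < 0.7:          # mechanoids up to stuffed-animal-like robots
            return likeness         # affinity grows roughly with likeness
        if likeness < 0.9:          # almost-but-not-quite human: the valley
            return 0.7 - 4.0 * (likeness - 0.7)    # steep drop into unease
        return -0.1 + 11.0 * (likeness - 0.9)      # recovery toward a healthy person

    for x in (0.2, 0.5, 0.75, 0.85, 0.9, 0.95, 1.0):
        print(f"likeness {x:.2f} -> familiarity {affinity(x):+.2f}")
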
1.2.3.2 Opportunities for Physical Activity Possibilities for physi-
cal activity are often determined by the robot's shape or physical
body. We saw that the industrial robot is often a nonmobile robot.
In addition, there are numerous mobile robots. Consider moving
robots, robotic craft, and flying robots, such as unmanned drones
(see Chapter 6) deployed by the U.S. military in the wars in Iraq
and Afghanistan. Another example concerns humanoid robots, such
as Honda’s Asimo (see Figure 1.5) and Toyota’s Partner Robot that
Figure 1.4 Mori's classic illustration of the “uncanny valley,” plotting familiarity against human likeness (0%–100%), with separate curves for still and moving objects and examples ranging from industrial robots, stuffed animals, and humanoid robots to bunraku puppets, prosthetic hands, zombies, corpses, and a healthy person. (From MacDorman, K.F. and Ishiguro, H., Interaction Studies, 7, 297, 2006.)
can walk at 67km/h (or 3–4 mph). Or animal robots such as the
four-legged BigDog, created in 2005 by Boston Dynamics in col-
laboration with the National Aeronautics and Space Administration
(NASA) and Harvard University. In addition to moving, there are
many other physical actions that robots can perform. The RIBA II care robot, developed by the Japanese RIKEN research institute, can lift patients weighing up to 80 pounds from the floor into a bed or a
wheelchair. One important technical challenge concerns the energy
source of mobile robots. The Roomba is a vacuum cleaner robot that
goes looking for its own recharger when its battery begins to get low.
In the United States, Robotic Technology Inc. and Cyclone Power
Technologies Inc. developed the EATR (Energetically Autonomous Tactical Robot), which can look for food (biomass) on its own, and from this
can create biofuel for its own energy needs.
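The Roomba's charger-seeking behavior comes down to a simple control loop: clean until the battery drops below a threshold, then navigate to the dock and recharge. The sketch below illustrates that idea; the states, threshold, and drain/charge rates are invented for the example and are not iRobot's actual software.

    # Minimal sketch of "seek the charger when the battery runs low."
    # States, threshold, and rates are assumptions made for illustration.

    LOW_BATTERY = 0.15   # switch to docking below 15% charge (assumed)
    FULL_CHARGE = 0.95   # resume cleaning once nearly full (assumed)

    def next_state(state: str, battery: float) -> str:
        """Return the next behavior state given the current charge level."""
        if state == "CLEANING" and battery < LOW_BATTERY:
            return "DOCKING"      # abandon cleaning, head for the charger
        if state == "DOCKING" and battery >= FULL_CHARGE:
            return "CLEANING"     # recharged: go back to work
        return state

    # Tiny simulation: the battery drains while cleaning, charges while docked.
    state, battery = "CLEANING", 0.30
    for _ in range(12):
        battery = min(1.0, battery + 0.10) if state == "DOCKING" else max(0.0, battery - 0.05)
        state = next_state(state, battery)
        print(f"battery {battery:4.0%}  state {state}")
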
1.2.3.3 Artificial Senses People have five senses: ears to listen with, eyes to see with, skin to feel with, a nose to smell with, and a tongue to taste with. Robots can also be fitted with all kinds of artificial senses, or rather sensors. Think of electronic noses and taste sensors. Cameras with light sensors are used for facial and emotion recognition. The perception of robots can outperform human perception by a long way.
Figure 1.5 Humanoid robot Asimo. (Photo courtesy of Bart van Overbeeke.)
Someunmanned military aircraft, so-called drones, use infrared cam-
eras for observation at night and use radar to be able to look through
clouds. Researchers want to improve surgical robots by applying touch
sensors. In this case, the robot communicates information about the surgical procedure by exerting force on the surgeon's hands. One
speaks of haptic feedback or haptic perception (i.e., perception through
the sense of touch).
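Stripped to its essentials, such haptic feedback means measuring the force at the instrument tip and reproducing a scaled, capped version of it on the surgeon's hand controller. The snippet below is a simplified sketch of that mapping with invented parameter values; real surgical systems are considerably more sophisticated.

    # Simplified sketch of haptic feedback: a force sensed at the instrument
    # tip is scaled and capped, then rendered on the surgeon's hand controller.
    # The scale factor and safety cap are illustrative assumptions only.

    SCALE = 0.5         # render tip forces at half strength (assumed)
    MAX_FORCE_N = 4.0   # never push back harder than 4 N (assumed safety cap)

    def rendered_force(tip_force_n: float) -> float:
        """Map a sensed tip force (N) to the force replayed on the controller (N)."""
        return min(tip_force_n * SCALE, MAX_FORCE_N)

    for sensed in (0.5, 2.0, 6.0, 12.0):
        print(f"sensed {sensed:5.1f} N -> rendered {rendered_force(sensed):4.1f} N")
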
1.2.4 The Robot Brain
Artificial Intelligence is the science of making machines do things that
would require intelligence if done by men.
Minsky (1968, p. v)
A robot is an IT system containing computer hardware and software. The robot contains no human intelligence, but AI. This AI determines the behavioral repertoire of the robot and its cognitive, social, and moral capabilities (Böhle, Coenen, Decker, & Rader, 2011). The assumption
is that emotional intelligence, social behavior, and dynamic interac-
tion with the environment are prerequisites for the individual and
social behavior of robots in complex social practices.
1.2.4.1 Strong and Weak Artificial Intelligence In the 1950s, the idea
arose that all forms of intelligence and learning could be so pre-
cisely described that a machine would be able to mimic them. Some
thought that human intelligence could be completely understood
with the help of computers and that it is possible to make machines
that act like humans and can think, reason, play chess, and show
emotions. is attitude is called the strong AI thesis. In this vision,
machines are ultimately smarter and morally more sensitive than
humans. Supporters of the weak AI thesis see computers as a tool in the study of the mind. They expect that machines can perform specific “intelligent” tasks to assist human users. Although the weak
AI concept has the most followers, the strong AI vision receives the
most media attention, partly because it has some very outspoken
advocates, such as Marvin Minsky (1968), Hans Moravec (1988),
and Ray Kurzweil (1990, 2005).
Minsky has been one of the main advocates of AI from its begin-
ning (Noble, 1997). He was already at Dartmouth College in 1956
when the rst meeting in the AI eld took place. Minsky suggested
it is possible to build intelligent machines, because brains themselves
are machines. According to Minsky, steps in that direction have been
taken by machines that are able to look up information, recognize
patterns, have expert knowledge, and prove mathematical theorems.
He also thought about the advent of robotics. Minsky foresaw a fusion
between man and machine in the distant future. According to him,
thinking machines represent the next step in evolution. The machina
sapiens is a new species that will eventually surpass Homo sapiens. AI
was, therefore, seen as the ultimate turning point in human evolution.
The main contemporary spokesperson on this theme is Raymond Kurzweil. He is a pioneer in the field of speech recognition, and the inventor, in 1976, of a device that turned text into voice for the blind reader. In his book and movie The Singularity Is Near (2005), he suggests that science and technology are developing exponentially. This
will inevitably lead, he believes, to a point at which AI will surpass
human intelligence. Vernor Vinge (1993) calls that moment “singu-
larity.” Kurzweil thinks that we will achieve this technical and cul-
tural turning point before the middle of this century.
1.2.4.2 Predictions from the Past Let us return to predictions from
the 1950s and 1960s. Norbert Wiener believed that computers would
come to play an important role in the production process, and spoke
of a forthcoming second Industrial Revolution (Umpleby, 2008). But
in addition to the use of AI for industrial tasks, all sorts of creative
and social tasks were foreseen for AI. Alan Turing thought that com-
puters would be able to communicate with people, and invented the
so-called Turing test. In it, a person sends questions to both another
person and a computer located in another room. On the basis of their
replies, the interrogator must determine whether he or she is com-
municating with a human or a machine. Turing predicted that in
50 years (thus around now), computers would master this question-and-answer game so well that the questioner would have a less than 70% chance of distinguishing, within 5 minutes, the computer from
the person. Marvin Minsky stated in 1958: “Our mind-engineering
skills could grow to the point of enabling us to construct artificial […]