Robots are familiar to all of us. From television commercials about robotic dogs to the nightly news about space exploration to assembly lines producing beer, cars, or widgets, robots are a part of modern society. Robotics—the study of robots—breaks down into two main categories: fixed robots and mobile robots. Fixed robots are what you see on assembly lines. The machines stay put and the products move. Because the world of a fixed robot is circumscribed, its tasks can be built into the hardware. Thus fixed robots belong mostly in the area of industrial engineering. Mobile robots, by contrast, move about and must interact with their environment. Modeling the world of the mobile robot requires the techniques of artificial intelligence.
Mobile robotics is the study of robots that move relative to their environment, while exhibiting a degree of autonomy. The original approach to modeling the world surrounding a mobile robot made use of plans. Planning systems are large software systems that, given a starting situation and a goal (a desired ending situation), can generate a finite set of actions (a plan) that, if followed (usually by a human), brings about the desired ending situation. These planning systems solve general problems by incorporating large amounts of domain knowledge. In the case of a mobile robot, the domain knowledge is the input from the robot’s sensors. In this approach, the world of the robot is represented in a complex semantic net in which the sensors on the robot capture the data used to build up the net. Populating the net is time consuming even for simple sensors; if the sensor is a camera, the process is very time consuming. This approach is called the sense–plan–act (SPA) paradigm2 and is shown in FIGURE 13.9.
The sensor data are interpreted by the world model, which in turn generates a plan of action. The robot’s control system (the hardware) executes the steps in the plan. Once the robot moves, its sensors get new data, and the cycle repeats with the new data being incorporated into the semantic net. Problems occur when the new sensory data cannot be processed fast enough to be used. (Perhaps the robot falls into a hole before the world model recognizes that the change in light is a hole rather than a shadow.) The flaw in this approach is that the representation of the robot’s world as domain knowledge in a general system is too general, too broad, and not tailored to the robot’s task.
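The cycle described above can be sketched in a few lines of code. This is only an illustration, not the implementation of any real robot: the sensor readings, the dictionary standing in for the semantic net, and the trivial planner are all invented for the example.

```python
# A minimal sketch of the sense-plan-act (SPA) cycle.
# All sensor values and planning rules here are hypothetical.

def sense():
    # A real robot would read hardware sensors; we return fixed values.
    return {"light_level": 0.3, "distance_ahead": 1.5}

def update_world_model(model, reading):
    # Incorporate the new sensor data into the world model
    # (a dict here, standing in for a complex semantic net).
    model.update(reading)
    return model

def plan(model, goal):
    # Generate a (trivial) plan: move forward unless too close.
    if model["distance_ahead"] < 0.5:
        return ["turn"]
    return ["forward"]

def act(step):
    # The control system would drive the actuators here.
    return f"executing {step}"

world_model = {}
goal = "reach the doorway"
log = []
for _ in range(3):                      # three turns of the SPA cycle
    reading = sense()
    world_model = update_world_model(world_model, reading)
    for step in plan(world_model, goal):
        log.append(act(step))
print(log)
```

The flaw noted above shows up even in this toy: if `sense()` is slow (a camera, say), the world model lags behind reality by a full cycle.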
In 1986, a paradigm shift occurred within the robotics community with Brooks’s introduction of subsumption architecture.3 Rather than trying to model the entire world all the time, the robot is given a simple set of behaviors, each of which is associated with the part of the world necessary for that behavior. The behaviors run in parallel unless they come in conflict, in which case an ordering of the goals of the behaviors determines which behavior should be executed next. The idea that the goals of behaviors can be ordered, or that the goal of one behavior can be subsumed by another, led to the name of the architecture.
In the model shown in FIGURE 13.10, Keep going to the left (or right) takes precedence over Avoid obstacles unless an object gets too close, in which case the Avoid obstacles behavior takes precedence. As a result of this approach, robots were built that could wander around a room for hours without running into objects or into moving people.
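The arbitration between the two behaviors in the figure can be sketched as follows. The behavior functions, the sensor dictionary, and the distance threshold are assumptions made for the example, not part of Brooks’s actual architecture.

```python
# Sketch of subsumption-style arbitration between two behaviors:
# "Avoid obstacles" subsumes "Keep going left" when an object
# gets too close. Sensor values and thresholds are hypothetical.

def avoid_obstacles(sensors):
    if sensors["nearest_object"] < 0.3:   # too close: take over
        return "veer away"
    return None                            # not triggered

def keep_going_left(sensors):
    return "move left"                     # default, always active

# Behaviors listed highest priority first.
behaviors = [avoid_obstacles, keep_going_left]

def arbitrate(sensors):
    # The first (highest-priority) behavior that produces
    # an action subsumes everything below it.
    for behavior in behaviors:
        action = behavior(sensors)
        if action is not None:
            return action

print(arbitrate({"nearest_object": 2.0}))   # nothing close: keep going
print(arbitrate({"nearest_object": 0.1}))   # obstacle: avoidance wins
```

Note that neither behavior needs a model of the whole world, only the sensor readings relevant to its own goal.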
The three laws of robotics defined by Isaac Asimov fit neatly into this subsumption architecture.5 See FIGURE 13.11.
Another shift in robotics moved away from viewing the world as a uniform grid with each cell representing the same amount of real space and toward viewing the world as a topological map. Topological maps view space as a graph of places connected by arcs, giving the notion of proximity and order but not of distance. The robot navigates from place to place locally, which minimizes errors. Also, topological maps can be represented in memory much more efficiently than can uniform grids.
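A topological map is naturally stored as an adjacency list, and local navigation from place to place becomes a graph search. The place names below are invented for illustration; a breadth-first search finds a route as a sequence of places, with no notion of distance.

```python
# A topological map as an adjacency list: places connected by arcs,
# capturing proximity and order but not distance. Place names are
# hypothetical.

from collections import deque

topo_map = {
    "charging dock": ["hallway"],
    "hallway": ["charging dock", "kitchen", "office"],
    "kitchen": ["hallway"],
    "office": ["hallway", "lab"],
    "lab": ["office"],
}

def route(start, goal):
    # Breadth-first search over places.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in topo_map[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])

print(route("charging dock", "lab"))
```

Contrast this with a uniform grid: a 100 × 100 grid needs 10,000 cells to cover the same floor that this map covers with five nodes.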
In the 1990s, a modified approach called hybrid deliberate/reactive, in which plans were used in conjunction with a set of behaviors with distributed worldviews, became popular.
We have been discussing the various approaches to try to get a robot to exhibit humanlike behavior and have ignored the physical components of a robot. A robot is made up of sensors, actuators, and computational elements (a microprocessor). The sensors take in data about the surroundings, the actuators move the robot, and the computational elements send instructions to the actuators. Sensors are transducers that convert some physical phenomena into electrical signals that the microprocessor can read as data. Some sensors register the presence, absence, or intensity of light. Near-infrared proximity detectors, motion detectors, and force detectors can all be used as sensors. Cameras and microphones can be sensors. The three most common systems on which robots move are wheels, tracks, and legs.
Artificial intelligence deals with the attempts to model and apply the intelligence of the human mind. The Turing test is one measure to determine whether a machine can think like a human by mimicking human conversation.
The discipline of AI has numerous facets. Underlying all of them is the need to represent knowledge in a form that can be processed efficiently. A semantic network is a graphical representation that captures the relationships among objects in the real world. Questions can be answered based on an analysis of the network graph. Search trees are a valuable way to represent the knowledge of adversarial moves, such as in a competitive game. For complicated games like chess, search trees are enormous, so we still have to come up with strategies for efficient analysis of these structures.
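A semantic network can be represented as a list of labeled edges, and a question becomes a traversal of the graph. The objects and relationships below are a made-up fragment, chosen only to show the idea.

```python
# A tiny semantic network stored as (subject, relationship, object)
# edges. The facts are invented for illustration.

edges = [
    ("robin", "is-a", "bird"),
    ("bird", "is-a", "animal"),
    ("bird", "has", "wings"),
]

def isa_chain(obj):
    # Answer "what is X?" by following is-a links upward.
    chain = [obj]
    while True:
        parents = [t for s, r, t in edges
                   if s == chain[-1] and r == "is-a"]
        if not parents:
            return chain
        chain.append(parents[0])

print(isa_chain("robin"))
```

A question such as “Is a robin an animal?” is answered by checking whether "animal" appears in the chain, an analysis of the network graph rather than a lookup of a stored fact.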
An expert system embodies the knowledge of a human expert. It uses a set of rules to define the conditions under which certain conclusions can be drawn. It is useful in many types of decision-making processes, such as medical diagnosis.
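The rule-based structure of an expert system can be sketched as a list of condition–conclusion pairs. The medical rules below are toy examples invented for illustration, not real diagnostic knowledge.

```python
# A toy rule-based inference step in the style of an expert system.
# Each rule pairs a set of required conditions with a conclusion.
# The rules and symptoms are invented for illustration.

rules = [
    ({"fever", "cough"}, "possible flu"),
    ({"sneezing", "itchy eyes"}, "possible allergies"),
]

def diagnose(symptoms):
    # Fire every rule whose conditions are all present.
    return [conclusion for conditions, conclusion in rules
            if conditions <= symptoms]

print(diagnose({"fever", "cough", "headache"}))
```

A real expert system would chain rules together (the conclusion of one rule becoming a condition of another); this sketch shows only a single inference pass.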
Artificial neural networks mimic the processing of the neural networks of the human brain. An artificial neuron produces an output signal based on multiple input signals and the importance we assign to those signals via a weighting system. This mirrors the activity of the human neuron, in which synapses temper the input signals from one neuron to the next.
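A single artificial neuron of this kind can be written in a few lines: a weighted sum of the inputs compared against a threshold. The weights and threshold below are chosen arbitrarily so that the neuron happens to compute a logical AND of two inputs.

```python
# A single artificial neuron: the weighted sum of the input signals
# determines whether the neuron fires. Weights and threshold are
# arbitrary example values.

def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0   # fire or stay quiet

# With these weights the neuron fires only when both inputs are on,
# i.e., it computes logical AND.
weights = [0.6, 0.6]
threshold = 1.0
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron([a, b], weights, threshold))
```

Adjusting the weights changes what the neuron computes, just as synapses temper the signals between biological neurons.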
Natural-language processing deals with languages that humans use to communicate, such as English. Synthesizing a spoken voice can be accomplished by mimicking the phonemes of human speech or by replying with prerecorded words. Voice recognition is most accurate when the spoken words are disjoint (separated by distinct pauses rather than run together in continuous speech), and it is even more effective when the system is trained to recognize a particular person’s voiceprint. Comprehending natural language—that is, applying an interpretation to the conversational discourse—lies at the heart of natural-language processing. It is complicated by various types of ambiguities that allow one specific sentence to be interpreted in multiple ways.
Robotics, the study of robots, focuses on two categories: fixed robots and mobile robots. Fixed robots stay put and have whatever they are working on come to them. Mobile robots are capable of moving and require the techniques of artificial intelligence to model the environment in which they navigate.
For Exercises 1–5, match the type of ambiguity with an example.
A. Lexical
B. Referential
C. Syntactic
For Exercises 6–21, mark the answers true or false as follows:
A. True
B. False
For Exercises 22–30, match the task with who can solve it most easily.
A. Computer
B. Human
Exercises 31–76 are problems or short-answer questions.
38. Which data structure defined in Chapter 8 is used to represent a semantic network?