PREFACE

We humans are good at moving around in this world of ours. If we are serious about the ubiquity of robots' help to humankind, we must pass this skill to our robots. It also turns out that in some tasks, robots can find their way better than humans. This suggests that it is time for humans and robots to join forces.

Imagine you arrive at a party. You are a bit late. The big room is teeming with voices and movement. People talk, drink, dance, walk. As you look around, you notice a friend waving to you from the opposite side of the room. You fill two glasses with wine, glance quickly across the room, and start on your journey. You maneuver between people, bend your body this way and that way to avoid collision or when shoved from the side, you raise your hands and squeeze your shoulders, you step over objects on the floor. A scientifically minded observer would say that you react to minute disruptions on your path while also keeping in mind your global goal; that you probably make dozens of decisions per second, and a great many sensors are likely involved in this process; that you react not only to what you see, but also to what you sense at your sides, your back, your feet. In a minute's time you happily greet your friend and hand him a glass of wine.

You may be surprised to hear that in your trip across the room you planned and executed a complex motion planning strategy whose emulation in technology is a yet unachieved dream of scientists and engineers. Providing a robot with the seemingly modest skill that you just demonstrated, an ability to move safely among surrounding objects using incomplete sensing information about them, would be a breakthrough in science and technology whose consequences for society are hard to overestimate. This would be the beginning of a new era, with a great number of machines of unimaginable variety moving quietly and productively in the world around us.

The main reason that we desire such technology is not, of course, the convenience of a wine-serving automatic maid. A machine's ability to safely operate in a reasonably arbitrary environment will lead to our automating a wide span of tasks that have eluded automation so far—from the delivery of drugs and food to patients in hospitals and nursing homes, to a robot “nurse” in the homes of elderly people, and to such indispensable tasks as cleaning chemical and nuclear waste sites, demining of old and new mine fields, planetary exploration, repair of faraway space satellites, and a great number of other tasks in agriculture, undersea, deep space, and so on. Equipped with this skill, the recent Mars rovers Spirit and Opportunity would have accomplished in hours what took them weeks.

We do not have such automation today. Today, humans are not even allowed to share space with serious robots, though a good number of the tasks above would require this. The only reason for this constraint is that today's robot bodies are too insensitive, too oblivious to their surroundings, and hence too dangerous to themselves and to objects and people around them.

Looking ahead to the near future, however, there are at least three good reasons for optimism. One is social: The problem will not go away, and so the pressure on scientists and engineers will stay strong. The need for machines capable of working in our midst or far away with little or no supervision will only grow with time. The value of human life and the increasing costs of human labor, combined with ever riskier undertakings in space, undersea, and in rough places on Earth, will continue the push for more automation. A very good example of this trend is the recent unique “attempt for an attempt” for a robot mission to save the ailing Hubble Telescope.

One may say that having a painful problem is not enough to find a solution. True, but then there are the other two reasons. The second reason for optimism is the successes of robot systems in recent years. Almost 1,000,000 highly reliable industrial robots are doing useful, sometimes quite complex, work worldwide. True, almost none of these robots can operate outside of their highly specialized man-made environment, and those few that do are too simplistic to be taken seriously. Hence the third reason for optimism: Research laboratories around the world report more and more sophistication in robot systems operating outside the “sanitized” factory environment. Robots have been shown to be as good as or better than humans in some tasks that require spatial reasoning and motion planning. Systems have been demonstrated where synergistic human-robot teams operate better, even smarter, than each of them separately. This trend is bound to continue.

It is the ability to plan its own motion that makes a robot qualitatively different from other machines. After all, the mechanical parts, electronics, computers, some functional abilities, and sophistication that robots possess are present in many other digitally controlled machines. Hence the half-humorous debates of the 1960s and 1970s, when designers of digitally controlled factory machinery accused specialists in robotics of inflating the prestige of their field by calling their machines robots—aren't these just slightly modified digitally controlled machines? There is some truth to that. Now we are approaching a time when the field of robotics will be able to say that it is the ability to plan its own motion that makes a robot a robot.

Doesn't such technology already exist? Haven't we read about robots that paint and weld and do assembly in automotive and computer manufacturing factories? For factories, yes, but for tasks outside the factory floor—hospitals and outer space and mine fields—no, not really, except perhaps in a few simplistic cases. What is the difference?

For you and me, the success of, say, returning a bottle to the refrigerator depends little on whether at this very instant the arrangement of objects in the refrigerator differs from what it was half an hour ago when the bottle was taken out. This is not so for today's robots.

If the required motion is to be repeated over and over again and if all the objects in the robot workspace can be described precisely—as they are, for example, on the car assembly line or in an automatic painting booth—using robots to automate the task presents no fundamental difficulties today. Designing the required trajectories for the tool in the robot hand is a purely geometric problem, fully solvable by computer. (Depending on the task specifics, it may of course require an unrealistically large amount of computation time, but this is another matter.) Once the car model changes next year, the new data are fed into the computer, and the required motion is recalculated. This is an example of a structured task, and it takes place in a structured environment. The word “structured” is roughly equivalent to “well-organized,” “known precisely,” “man-made.” Objects in a structured environment can be safely assumed fully known in space and time.

As a rule, a structured environment is designed, carefully and often at great cost, by highly qualified professionals. From the standpoint of motion planning, the input information that the robot needs in order to generate the desired motion is available before the motion starts. What is needed are appropriate algorithms for transforming this information into proper motion trajectories. Today there are plenty of such algorithms. This setup represents the Intelligence-Motion planning paradigm.

This algorithmic paradigm was formulated right at the beginning of robotics as a field of science and technology, around the mid-1960s. Today the Intelligence-Motion paradigm boasts a large literature, appearing under such names as motion planning with complete information, or model-based motion planning, or the Piano Mover's model. The symbolism behind the latter term is that when movers set out to move a piano, they can first sit down and figure out the whole sequence of moves and turns and raisings and lowerings, before they start the actual motion. After all, the physical setting that encompasses this information is right there before them. (Except, one might comment, “Who in this world would ever do it this way?” More likely the movers just say, “Let's do it!”, and they discuss every move as they get to it—thereby losing an opportunity to contribute to a great theory.)

On the theoretical level, the problem of motion planning with complete information is more or less closed: remarkably complete and enlightening studies of the problem have provided computational complexity bounds, motion planning algorithms, and deep insights into the problem. Which is not to say that all problems in this area are solved. Most of today's work in this area is devoted to special cases and to struggling with computational issues in realistic settings. Somewhat ironically, applications where such techniques are used today relate not so much to robotics as to other areas: computer-aided design (CAD; e.g., to design an aircraft engine so as to allow quick removal or replacement of a given unit), models of protein folding in biology, and a few others. The major property of such tasks is that the required motion is designed in a database rather than in a physical setting. Given the wealth of published work in this area, this book reviews the Piano Mover's paradigm only cursorily.

The focus of this book is on unstructured tasks—tasks that unfold in an unstructured environment, an environment that is not predesigned and has to be taken as is. Most of the motion planning examples above (homes, outdoors, deep space, etc.) refer to unstructured tasks. Until recently, robotics practitioners have either ignored this area or have limited their efforts to grossly simplified tasks with robot hands or with mobile robots. Even in the latter cases the operation is mostly limited to tight human teleoperation, with a minimum of robot autonomy (as in the case of the recent Mars rovers). All kinds of helpful “artificial” measures—for example, an extremely slow operation—are taken to allow the operator to precede commands with careful analysis.

Automating motion planning for mobile robots will be considered in the first sections of this text. We will also see later that teaching a robot arm manipulator to safely move in an unstructured environment is a much taller order than the same request for a mobile robot. This is unfortunate because a large number of pressing applications require manipulators. Today people use far more arm manipulators than mobile robot vehicles. An arm manipulator is a device similar to a human arm. If the task is to just move around and sense data or take pictures, that is a job for a mobile robot. But if the task requires “doing things”—welding, painting, putting things together or taking them apart—one needs an arm manipulator. Interestingly, while collision avoidance is a major bottleneck in the use of robot manipulators, there is only a minuscule literature on the subject. This book attempts to fill the gap.

Objects in an unstructured robot workspace cannot be described fully—either because of their unyielding shape, or because of lack of knowledge about them, or because one doesn't know which object is going to be where and when, or because of all three. In dealing with an environment that has to be taken as is, our robots have a good example to follow: Evolution has taught us humans how to move around in our messy unstructured world. We want our robots to leap-frog this process.

And then there are tasks—especially, as we will see, with motion planning for arm manipulators—where human skills and intuition are not as enviable. In fact, not enviable at all. Then not only do we need to enter uncharted territories and synthesize new robot motion planning strategies that are way beyond human spatial reasoning skills, but also we must build a solid theoretical foundation behind them, because human experience and heuristics cannot help ascertain their validity.

If the input information about one's surroundings is not available beforehand, one cannot of course calculate the whole motion at once, or even in large pieces. What do we humans and animals do in such cases? We compensate by real-time sensing and sensor data processing: We look, touch, listen, smell, and continuously use the sensing information to plan, execute, and replan our motion. Even when one thinks one knows by heart how to move from point A to point B—say, to drive from home to one's office—the actual execution still involves a large amount of continuous sensor-based motion planning.

Hence the names one finds in the literature for approaches to motion planning in an unstructured environment: motion planning with incomplete information, or sensor-based motion planning. Another good name comes from the crucial role that this paradigm assigns to sensing: Similar to the phrase Intelligence-Motion for motion planning with complete information, we will use the name Sensing-Intelligence-Motion (SIM) for motion planning with incomplete information. The SIM approach will help open the door for robotics into automation of unstructured tasks. (Recall “Open door, Simsim!” in the Arabian tale “Ali Baba and the Forty Thieves.”)
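
To make the contrast with the Piano Mover's setting concrete, here is a minimal sketch, in Python, of such a sense-plan-replan loop. The grid world, the local "sensor," and the greedy rule are all invented for this illustration; the algorithms developed in this book are far more careful and, unlike this naive rule, aim for guarantees that it lacks.

# A toy sense-plan-replan loop (illustration only; names and rules are invented).
# The robot stands on a 2D grid, senses only its four immediate neighbors,
# and decides one step at a time -- it never sees the whole obstacle in advance.

GOAL = (9, 9)
WALL = {(4, y) for y in range(2, 9)}   # an obstacle unknown to the robot beforehand

def sense(pos):
    """Return the free neighboring cells -- the only 'world model' the robot ever has."""
    x, y = pos
    return [n for n in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)] if n not in WALL]

def next_step(pos, visited):
    """Greedy local decision: prefer an unvisited free neighbor closest to the goal."""
    options = [n for n in sense(pos) if n not in visited] or sense(pos)
    return min(options, key=lambda n: abs(n[0] - GOAL[0]) + abs(n[1] - GOAL[1]))

def run(start=(0, 0), max_steps=200):
    pos, visited, path = start, {start}, [start]
    for _ in range(max_steps):
        if pos == GOAL:
            break
        pos = next_step(pos, visited)  # sense, decide, move -- then repeat
        visited.add(pos)
        path.append(pos)
    return path  # the naive rule above carries no guarantee of ever reaching the goal

if __name__ == "__main__":
    print(run())

The point is not this particular rule but the shape of the loop: every step is decided from fresh local sensing rather than from a complete model known in advance.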

The described differences in how input information appears in the Piano Mover's and SIM paradigms affect their approach to motion planning in crucial ways—so much so that attempts at a symbiosis of some useful features of “structured” and “unstructured” approaches have so far been of little theoretical interest and little practical use.

While techniques for motion planning with complete information started in earnest in the first years of robotics, sometime in the early 1960s, the work on SIM approaches started later, in the late 1980s, and has proceeded more slowly. The slow pace is partly due to the fact that the field of robotics in general and the area of motion planning in particular have been initiated primarily by computer scientists. The combinatoric-computational professional inclinations of these visionaries made them more enthusiastic about geometric and computational issues in robotics than about real-time control and the algorithmic role of sensing. Another important reason is the tight connection between algorithms and hardware that the SIM approach espouses. As we will see later, some of this (sensing) hardware has only started appearing recently. Finally, a quick look at this book's table of contents will show that the work on SIM approaches requires from its practitioners a somewhat unusual combination of background: topology, computational complexity, control theory, and rather strange sensing hardware.

Whatever the reasons, in spite of its great theoretical interest and an immense practical potential, the literature on the sensor-based motion planning paradigm is small, especially for arm manipulators. In fact, today there are no textbooks devoted to it.

Our goals in this book are as follows:

(a) Formulate the problem of sensor-based motion planning. We want to explore why the relevant issues are so hard—so much so that in spite of hard work and some glorious successes of robotics, there is no robot today that can be left to its own devices, without supervision, outdoors or in one's home. Build a theoretical foundation for sensor-based motion planning strategies.

(b) Study in depth a variety of particular algorithmic strategies for mobile robots and robot arm manipulators, and try to identify promising directions for conquering the general problem.

(c) Given the similarity of underlying tasks and requirements, compare robot performance and human performance in sensor-based motion planning. The hope is that by doing so we can get a better insight into the nature of the problem, and can help build synergistic human–robot teams for tele-operation tasks.

(d) Review sensing hardware that is necessary to realize the SIM paradigm.

The book is intended to serve three purposes: (1) as a course textbook; (2) as a research text covering in depth one particular area of robotics; (3) as a program of research and development in robotic automation of unstructured tasks.

As a Textbook. A good portion of this book grew out of graduate and senior undergraduate courses on robot motion planning taught by the author at Yale University and the University of Wisconsin—Madison. As often happens with research-oriented courses, the course kept changing as more research material appeared and our knowledge of the subject expanded.

The text assumes a basic college background in mathematics and computer science. A prior introductory course in robotics and some knowledge of topology will be helpful but are not required. Some more exposure to topology is advised for mastering the analysis that appears in Section 5.8 (Chapter 5) and the first two pages of Section 6.2.4 (Chapter 6). Conclusions from this analysis, in particular the formulation of algorithms, are written at a level compatible with the rest of the book, though. The instructor is advised to glance through the chapters beforehand to decide what background a given chapter or section requires.

Homework examples are provided as needed. In my view, a good homework structure for an advanced course like this one includes two components: (a) ordinary homework assignments that dig deeper into the student's knowledge, are modest in number, and require a week or two to complete each; and (b) a course project that is initiated in the course's first few weeks, goes in parallel with it, and is defended at the end of the course, with the defense treated as the final exam. The weights of these components in the student's final grade can be, say, 50% for the homework, 20% for the midterm assessment of the project, and 30% for the project's final text and in-class presentation. A list of ideas for course projects is provided in Chapter 9.

Assuming a conventional two-semester school year, this book has about two semesters' worth of material. A one-semester course hence calls for choices. A typical structure that covers the ideas and computational schemes of the sensor-based motion planning paradigm will include Chapters 1, 2, 3, 5, and 6 (Motion Planning—Introduction, A Quick Sketch of Major Issues in Robotics, Motion Planning for a Mobile Robot, Motion Planning for Two-Dimensional Arm Manipulators, Motion Planning for Three-Dimensional Arm Manipulators). Let us call this sequence the core course. The sequence contains no control theory or electronics, and it allows for the widest audience in terms of students' majors.

For a strictly engineering class where students have already had courses in controls and electronics, the instructor may want to sharply contract the time for Chapter 2 and provide instead a deeper understanding of the effects of robot dynamics on motion planning, covered in Chapter 4, plus a cursory review of the principles of design of sensing devices necessary for realizing sensor-based motion planning strategies, Chapter 8. Any group can benefit from Chapter 7, which is devoted to human performance in motion planning and spatial reasoning tasks. A two-semester sequence will comfortably cover all those chapters (with the risk that one will notice some repetitions, necessitated by the different intended uses of the book).

The decision to include in the course the topics covered in Chapters 4, 7, and 8, as well as the time devoted to the introductory Chapters 1 and 2 will depend much on the mixture of students in class, in particular their prior exposure to robotics, control theory, and electronics. Mandating prior courses on these topics may introduce interesting difficulties. In my experience, a significant percentage of graduate students attracted to this course come from disciplines outside of engineering, computer science, physics, and mathematics—such as business administration, psychology, and even medicine. This is not surprising since the course material touches upon the future of their disciplines rather deeply. Students from some areas, especially the latter three above, are usually interested in ideas and cognitive underpinnings of the subject. These students are often extremely good, quick, and knowledgeable and have a reasonably good background in mathematics. Often such students do well in homework assignments, bring in new ideas, and come up with wonderful course projects in their appropriate areas. Denying their participation would be a pity, in my view—after all, robotics is a wide and widely connected field.

With such students in class, the instructor may choose to spend a bit more time on the introductory sections, in order to bring up to speed students who have had no past exposure to the robotics field. The instructor may also want to complement introductory material with a relevant textbook (some such textbooks are mentioned in Chapters 1 and 2). Students' grades in the homework at the end of Chapter 2 will give the instructor a good indication of how prepared they are for the core course.

As a Research Text. This book is targeted to people who are interested in or are directly involved in research and development of robot and human–robot interaction systems. If one's goal is to understand the underlying issues or design a system capable of purposeful motion in an unstructured environment while protecting the robot's whole body—in streets, homes, undersea, deep space, agriculture, and so on—today SIM is the only consistent approach one can count on. This is not to say that the book contains answers to all questions. It provides some constructive answers, and it calls for continuation.

The book should also be of interest to people working in areas that are tangentially connected to robotics, such as sensor development and design of tele-operated systems. And finally, the book will hopefully appeal to people interested in the wide complex of underlying issues in robotics and human–robot interaction, from mathematical and algorithmic questions to cognitive science to advanced robot applications.

As a Program for Continued Research and Development. To repeat the statement above, today the Sensing-Intelligence-Motion (SIM) approach seems to be the only paradigm that holds promise to bring about robot automation of unstructured tasks. This is not because of some special sophistication of SIM techniques, but simply because only SIM techniques take care of the necessary whole body awareness of the robot and do it “on the fly,” in real time, making it possible to handle a high level of uncertainty. And only this approach guarantees results in this area when human intuition breaks down.

And yet, as one will see later, only a limited number of SIM algorithms and sensing schemes for real-world robot systems have been explored so far. Much of the theory and of the algorithmic and hardware machinery that is necessary to bring the SIM approach to full fruition lies ahead of us. The book starts on the misty route that lies ahead and that has to be traversed if we are serious about bringing automation into unstructured tasks. At the risk of being seen as less than balanced, I suggest that not many areas of computer science and engineering can compete with the excitement, the required breadth of knowledge, and the potential impact on society of the topics covered in this book.

Professional and commercial importance of robotics aside, robots have always been of immense interest to the general public. Isaac Asimov's robot heroes are household names. Crowds invariably surround fake robots (controlled by humans from nearby buildings) on the Disneyland streets. Robot exploits on Mars or on the Space Shuttle or in a minefield disarming operation make the front pages of newspapers. What excites laymen is a robot's potential for human-like behavior. This book takes the reader further in this same direction by providing a solid foundation behind one human-like ability of robots that has so far been assumed to be an inherent monopoly of humans—namely, the ability to think of and plan one's motion in an unstructured world.

Robots are often referred to derisively: “He moves like a robot,” “Yours is a robot reaction,” “Hey, don't behave like a robot.” What is meant is crude, unintelligent, and mechanical; even the word “mechanical” signifies here crude and unintelligent. Many mimes entertain the crowd on the street corners by moving “like a robot”—that is, switching sharply from one movement to the other and being oblivious to the surroundings.

That is not what robots should be, and it is not even what they are today. Examples in Chapter 8 will show that when equipped with means for self-awareness and with strategies to use it, robots become sensitive to their surroundings, “pensive,” and even gentle in how they “mind” their movement.1 A nonprofessional reader curious about the possibilities of intelligent robots will find long layman-level passages in the Introduction, introductory sections to other chapters, discussions, examples, and simplified explanations of the underlying ideas throughout the text.

Designing a whole-sensitive robot is almost like designing a friend. One day you move your hand in a stroking movement along the robot's skin, and it responds with a gentle appreciative movement. This gives you a strange feeling: We humans are totally unprepared to see a machine exhibit a behavior that we fully expect from a cat or a dog. I hope that both professional and layman readers will share this gratifying feeling. And, of course, I hope the book will further our attempts toward populating our environment with helpful and loyal robot friends.

VLADIMIR J. LUMELSKY

Madison, Wisconsin
Washington D.C.
April 2005

1Sharp “robot-like” movements are a persistent myth maintained by science fiction. Many robot applications—car painting is a good example—require smooth motion and simply cannot tolerate sharp turns. Today's industrial robots can generate a motion that is so smooth and delicate that it may be the envy of “Swan Lake” ballerinas. For those who know calculus, what dancer can promise, for example, a motion so smooth that both its derivatives have guaranteed continuity!
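As one standard illustration (not specific to any robot discussed in this book), a point-to-point move from position $p_0$ to $p_f$ over time $T$ may follow the quintic profile, with $\tau = t/T$:

$$p(t) = p_0 + (p_f - p_0)\,(10\tau^3 - 15\tau^4 + 6\tau^5), \qquad 0 \le t \le T,$$

whose velocity $\dot p$ and acceleration $\ddot p$ are continuous and vanish at both ends, so that consecutive segments join with continuous position, velocity, and acceleration.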
