III

Perceptual Processing in ROS

INTRODUCTION

Intelligent robot behavior requires that a robot perform actions appropriate to the context of its environment. For example, a mobile robot should avoid collisions and avoid navigating over impassable or dangerous terrain. A robot manipulator should perceive and interpret objects of interest, including identifying and localizing them, and should plan collision-free trajectories for part acquisition and manipulation. By perceiving and interpreting the environment, a robot can locate objects of interest or deduce appropriate actions (e.g. putting dishes in a dishwasher or fetching a specified article from a warehouse), as well as generate viable grasp and manipulation plans. Realizing such sensor-based behaviors requires perceptual processing of sensory data.

In general, understanding one’s environment based on sensory data is an enormous challenge encompassing multiple fields. Nonetheless, some useful sensor-driven behaviors are currently practical, and ROS tools exist to assist in designing them. Perceptual processing (e.g. computer vision) has a much longer history than ROS, and it is important that ROS be compatible with existing open-source libraries. Notably, OpenCV and the Point Cloud Library offer powerful tools for interpreting sensory data from cameras, stereo cameras, 3-D LIDAR and depth cameras.

The next three chapters will introduce using cameras in ROS, depth imaging and point clouds, and point-cloud processing. It should be appreciated that this introduction is not a substitute for learning image processing in general, nor OpenCV or PCL in particular. A recommended guide for using OpenCV is [4]. Use of the Point Cloud Library is not, at the time of this writing, presented in textbook style; however, on-line tutorials are available at http://pointclouds.org/.
