CHAPTER 18


Conclusion


Although the first industrial robot was installed over 50 years ago, progress in the field has been slow. In large part, this is because isolated efforts were built as unique systems, and little of this work was reusable in subsequent systems. Given the difficulty of building a complex, intelligent system, individual efforts struggled to surpass the achievements of earlier projects. With the advent of ROS and its acceptance by roboticists, current robotic systems are developed much more rapidly than in the past, enabling more attention to be focused on pushing back the frontiers of robot competence. With ROS's communications infrastructure, separate but integrated nodes can be run concurrently, and these nodes, which may be contributed by collaborators world-wide, can be distributed easily across multiple computers. Specific algorithms with demonstrated world-leading performance can be adopted and incorporated into ROS-compatible systems easily and rapidly. Further, ROS leverages the capabilities of independent open-source projects, including OpenCV, the Point Cloud Library, the Eigen library, and Gazebo (with its underlying open-source physics engines).

This text has reviewed ROS in a structured presentation, starting with its foundations in communications. The concept of communication among nodes was covered in Section I, including the paradigms of publish and subscribe, services and clients, action servers and action clients, and the parameter server. Tools to assist development and debugging include rosrun, roslaunch, rosbag, rqt_plot, rqt_reconfigure and rqt_console. These tools, together with the underlying communications, help developers create robotic systems faster than ever before.
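As a brief reminder of the publish-and-subscribe pattern that underlies this communication layer, the following minimal roscpp sketch shows a node that both publishes and subscribes. The topic name (example_topic) and message type (std_msgs/Float64) are arbitrary choices for illustration, not specific to any package in this text.

// minimal publish/subscribe sketch; topic and message type are illustrative
#include <ros/ros.h>
#include <std_msgs/Float64.h>

void messageCallback(const std_msgs::Float64& msg) {
    ROS_INFO("received value: %f", msg.data);  // echo each received message
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "minimal_pub_sub");  // register this node with the ROS master
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<std_msgs::Float64>("example_topic", 1);
    ros::Subscriber sub = nh.subscribe("example_topic", 1, messageCallback);

    ros::Rate rate(10.0);  // publish at 10 Hz
    std_msgs::Float64 msg;
    msg.data = 0.0;
    while (ros::ok()) {
        pub.publish(msg);
        msg.data += 0.1;
        ros::spinOnce();   // allow the subscriber callback to run
        rate.sleep();
    }
    return 0;
}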

Section II introduced simulation and visualization in ROS. The unified robot description format (URDF) simplifies robot modeling, including kinematics, dynamics, visualization and physical interaction (contacts and collisions). A key component of ROS's simulation capabilities is the ability to simulate the physics of sensors, including torque sensors, accelerometers, LIDAR, cameras, stereo cameras and depth cameras. Further, the open-source nature of Gazebo simulation and rviz visualization allows plug-ins to extend these capabilities. rviz supports robot system development by providing visualization of robot models together with their sensory values (e.g. point-cloud displays) and visualization of plans (e.g. navigation plans within cost maps). In addition to providing sensory display, rviz can be used as a human interface. An operator can interact directly with displayed data, providing the robotic system with focus of attention or context. User-definable markers can help display the robot's “thoughts,” which can help with debugging or validation of plans in a supervisory-controlled system. Interactive markers can also be constructed, allowing an operator to specify 6-D poses of interest.
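For instance, displaying a user-definable marker in rviz requires only populating and publishing a visualization_msgs/Marker message. The following minimal sketch assumes an illustrative topic name (example_marker) and reference frame (base_link).

// minimal sketch: publish a sphere marker for display in rviz
#include <ros/ros.h>
#include <visualization_msgs/Marker.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "example_marker_publisher");
    ros::NodeHandle nh;
    ros::Publisher marker_pub =
        nh.advertise<visualization_msgs::Marker>("example_marker", 1);

    visualization_msgs::Marker marker;
    marker.header.frame_id = "base_link";   // frame in which the marker is expressed
    marker.ns = "example";
    marker.id = 0;
    marker.type = visualization_msgs::Marker::SPHERE;
    marker.action = visualization_msgs::Marker::ADD;
    marker.pose.position.x = 1.0;           // a point of interest, e.g. a planned goal
    marker.pose.orientation.w = 1.0;
    marker.scale.x = marker.scale.y = marker.scale.z = 0.1;  // 10 cm sphere
    marker.color.r = 1.0;
    marker.color.a = 1.0;                   // alpha must be non-zero to be visible

    ros::Rate rate(1.0);
    while (ros::ok()) {
        marker.header.stamp = ros::Time::now();
        marker_pub.publish(marker);         // shown in rviz via a Marker display
        rate.sleep();
    }
    return 0;
}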

Section III surveyed sensory interpretation, including camera calibration, use of OpenCV, stereo imaging, 3-D LIDAR sensing and depth cameras. Sensing is essential for mapping, navigation, collision avoidance, object sensing and localization, manipulation, and error detection and recovery. Perceptual processing is a large field, and the present introduction is not intended to teach machine vision or point-cloud processing in general. However, it is an asset of ROS that it integrates smoothly with perception, including bridges to OpenCV and PCL. It is also key that ROS provides extensive support for coordinate transformations. Coordinate transforms are essential for calibrating sensors and for navigation, collision avoidance and hand–eye coordination. ROS's tf package and tfListener are used to integrate data for display in rviz, as well as to support navigation and robot-arm kinematics. An object-finder action server was described, in which specific, modeled objects can be recognized and localized. This example package is limited in its capabilities, but it illustrates the functionality expected of more general perception packages.
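As a reminder of how a node queries such transforms, the following minimal sketch uses a tf listener to look up the pose of a camera frame with respect to the robot's base frame. The frame names (base_link, camera_link) are illustrative assumptions.

// minimal sketch: look up a coordinate transform with a tf listener
#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "example_tf_listener");
    ros::NodeHandle nh;
    tf::TransformListener listener;  // subscribes to tf and buffers transforms

    ros::Rate rate(1.0);
    while (ros::ok()) {
        tf::StampedTransform transform;
        try {
            // most recent transform from the camera frame to the base frame
            listener.lookupTransform("base_link", "camera_link",
                                     ros::Time(0), transform);
            ROS_INFO("camera origin w/rt base: %f, %f, %f",
                     transform.getOrigin().x(),
                     transform.getOrigin().y(),
                     transform.getOrigin().z());
        } catch (tf::TransformException& ex) {
            ROS_WARN("%s", ex.what());  // transform not yet available
        }
        rate.sleep();
    }
    return 0;
}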

Section IV reviewed ROS's support of mobile robots. The navigation stack is one of the popular successes of ROS. It integrates map-making, localization, global planning, local planning and steering. The modules that make up the nav-stack incorporate some of the best algorithms from researchers around the world. At the same time, improvements can be incorporated, whether for the field as a whole or for targeted, specific systems.
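From a user's perspective, the nav-stack can be commanded through its move_base action interface. The following minimal sketch sends a single navigation goal; the goal coordinates are arbitrary examples expressed in the map frame.

// minimal sketch: send one navigation goal to the move_base action server
#include <ros/ros.h>
#include <actionlib/client/simple_action_client.h>
#include <move_base_msgs/MoveBaseAction.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "example_nav_goal_sender");

    actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> ac("move_base", true);
    ac.waitForServer();  // block until the nav-stack's action server is up

    move_base_msgs::MoveBaseGoal goal;
    goal.target_pose.header.frame_id = "map";   // goal expressed in the map frame
    goal.target_pose.header.stamp = ros::Time::now();
    goal.target_pose.pose.position.x = 2.0;     // desired x, y (meters); example values
    goal.target_pose.pose.position.y = 1.0;
    goal.target_pose.pose.orientation.w = 1.0;  // facing along the map x axis

    ac.sendGoal(goal);
    ac.waitForResult();
    if (ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
        ROS_INFO("navigation goal reached");
    else
        ROS_WARN("navigation failed");
    return 0;
}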

Section V presented ROS in the context of robot arm planning and control. At the joint-space level, the concept of the trajectory message unifies robot interfacing across a wide variety of common and novel robots, and the parallel ROS-Industrial effort extends this to a growing base of industrial robots. The joint-level interface supports teach and playback, a common industrial programming approach. Perception-based manipulation, however, requires online kinematic planning. While the fields of robot kinematics (forward and inverse) and kinematic planning are too broad to be covered in this text, it was shown how kinematic libraries and kinematic planning can be implemented in ROS. Specific examples were provided using a realistic simulation of the Baxter robot. An object-grabber action server was introduced, which performs kinematic planning and execution in the context of manipulating objects with known, desirable grasp transforms. It was shown how such functionality can be constructed to abstract manipulation programming at the task level. With such abstraction, manipulation programs can be reused across different types of robots with different tooling or grippers.
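As a reminder of the joint-space interface, the following minimal sketch populates and publishes a trajectory_msgs/JointTrajectory message. The joint names and the command topic (joint_command) are illustrative assumptions and depend on the target robot.

// minimal sketch: populate and publish a one-point joint-space trajectory
#include <ros/ros.h>
#include <trajectory_msgs/JointTrajectory.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "example_trajectory_publisher");
    ros::NodeHandle nh;
    ros::Publisher traj_pub =
        nh.advertise<trajectory_msgs::JointTrajectory>("joint_command", 1);

    trajectory_msgs::JointTrajectory traj;
    traj.joint_names.push_back("joint1");        // names must match the robot model
    traj.joint_names.push_back("joint2");

    trajectory_msgs::JointTrajectoryPoint point;
    point.positions.push_back(0.5);              // goal angle for joint1 (rad)
    point.positions.push_back(-0.3);             // goal angle for joint2 (rad)
    point.time_from_start = ros::Duration(2.0);  // reach this point 2 s after start
    traj.points.push_back(point);

    ros::Duration(1.0).sleep();                  // allow the publisher to connect
    traj.header.stamp = ros::Time::now();
    traj_pub.publish(traj);
    ros::spinOnce();
    return 0;
}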

In Section VI, the focus was on system integration. Robot and sensor modeling from Section II was applied to integrate a dual-arm robot on a mobile base, together with LIDAR sensing for navigation and depth sensing for manipulation. The object finder of Section III, the nav-stack implementation from Section IV and the object-grabber package of Section V were integrated within a fetch-and-stack example. Using these capabilities, the robot was able to perceive an object, plan and execute a grasp of the object, navigate to a drop-off destination, perceive a drop-off location, and place the grasped object accordingly. This demonstration is indicative of emerging applications in automated storage and retrieval, industrial kitting operations, logistics for filling customer orders, and future applications in domestic robotics.
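The structure of such an integrated application can be suggested by a task-level coordinator that sequences these capabilities. The following sketch is purely illustrative: the helper functions (findObject, graspObject, navigateTo, placeObject) are placeholder stubs standing in for clients of the perception, object-grabber and navigation action servers, not functions from the accompanying code.

// hypothetical coordinator sketch: task-level sequencing of fetch-and-stack
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <string>

// In a real system, each helper would wrap an action client for the
// object finder, object grabber, or navigation stack.
bool findObject(geometry_msgs::PoseStamped& pose) { ROS_INFO("perceiving"); return true; }
bool graspObject(const geometry_msgs::PoseStamped& pose) { ROS_INFO("grasping"); return true; }
bool navigateTo(const std::string& dest) { ROS_INFO("navigating to %s", dest.c_str()); return true; }
bool placeObject(const geometry_msgs::PoseStamped& pose) { ROS_INFO("placing"); return true; }

int main(int argc, char** argv) {
    ros::init(argc, argv, "fetch_and_stack_coordinator");
    geometry_msgs::PoseStamped object_pose, dropoff_pose;

    // the demonstration sequence: perceive, grasp, navigate, perceive, place
    if (!findObject(object_pose)) return 1;
    if (!graspObject(object_pose)) return 1;
    if (!navigateTo("dropoff_station")) return 1;
    if (!findObject(dropoff_pose)) return 1;  // re-use perception for the drop-off location
    if (!placeObject(dropoff_pose)) return 1;
    ROS_INFO("fetch-and-stack sequence complete");
    return 0;
}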

While this presentation has surveyed wide-ranging aspects of robotics, it has barely scratched the surface of possibilities. ROS has thousands of open-source packages, allowing developers to build novel systems rapidly and to incorporate specific expertise without having to be experts in every aspect. Additional capabilities and upgrades continue to emerge from contributors world-wide. Higher-level controls, including state machines and decision trees for more abstract planning, have not been addressed here. Sophisticated object recognition, grasp planning, environment modeling and kinematic planning in 3-D are available in ROS. The MoveIt! environment, which consolidates many of these components, has not been described. Nonetheless, it is the author's hope that this text will prepare the reader to be a more effective learner. The online ROS tutorials offer extensive details not covered here, and this presentation should enable the reader to learn from them more effectively. Further, by building and dissecting the code accompanying this text, it is hoped that the reader will be better able to adopt existing ROS packages, to use and modify them as necessary, and to contribute new packages and tools that will further help advance the field of robotics.
