Summary

TurtleBot comes with its own 3D vision system, which serves as a low-cost alternative to a laser scanner. Kinect, ASUS, PrimeSense, and RealSense devices can be mounted on the TurtleBot base to provide a 3D depth view of the environment. This chapter compared these four types of sensors and identified the software needed to operate them as ROS components. We checked their operation by testing the sensors on TurtleBot in standalone mode. Image streams from the rgb and depth cameras can be viewed with Image Viewer or rviz.
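
For example, with a camera driver running, the rgb and depth streams can be inspected with image_view. The /camera/... topic names shown here are typical defaults and may differ depending on your driver and launch files:

    $ rosrun image_view image_view image:=/camera/rgb/image_raw
    $ rosrun image_view image_view image:=/camera/depth/image_raw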

For TurtleBot 3, the LDS sensor was described, and the ROS software and camera driver software were identified.
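
As a quick standalone check, the LDS can be started and its scan data echoed from the command line. The launch file name below follows the turtlebot3_bringup package convention and is an assumption that may vary with your installation:

    $ roslaunch turtlebot3_bringup turtlebot3_lidar.launch
    $ rostopic echo /scan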

The primary objective is for TurtleBot to see its surroundings and autonomously navigate through them. First, TurtleBot is driven around in teleoperation mode to create a map of the environment. The map provides the room boundaries and obstacles so that amcl can localize TurtleBot within it and the navigation stack can plan a path from the robot's starting location to a user-defined goal.
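
A typical command sequence for this workflow on TurtleBot 2 uses the turtlebot_teleop and turtlebot_navigation demo launch files; the map file path here is only an example:

    $ roslaunch turtlebot_teleop keyboard_teleop.launch
    $ roslaunch turtlebot_navigation gmapping_demo.launch
    $ rosrun map_server map_saver -f /home/user/my_map
    $ roslaunch turtlebot_navigation amcl_demo.launch map_file:=/home/user/my_map.yaml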

Navigation to a designated location was also demonstrated without a map. In addition, an example Python script showed how to navigate with the move_base action using a map of the environment.
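
A minimal sketch of such a script, using the standard actionlib client interface to move_base (the goal coordinates here are placeholders), looks like this:

    #!/usr/bin/env python
    import rospy
    import actionlib
    from actionlib_msgs.msg import GoalStatus
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node('send_nav_goal')

    # Connect to the move_base action server
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    # Build a goal pose in the map frame; coordinates are placeholders
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0
    goal.target_pose.pose.position.y = 0.5
    goal.target_pose.pose.orientation.w = 1.0  # face along the map x axis

    # Send the goal and wait for the navigation result
    client.send_goal(goal)
    client.wait_for_result()
    if client.get_state() == GoalStatus.SUCCEEDED:
        rospy.loginfo('TurtleBot reached the goal')
    else:
        rospy.logwarn('TurtleBot failed to reach the goal')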

In the next chapter, we will return to the ROS simulation world and create a robot arm. Developing a URDF for a robotic arm and controlling it in simulation will prepare us to examine Baxter's robotic arms in Chapter 6, Wobbling Robot Arms Using Joint Control. Using Baxter's arms, we will explore the complexities of multiple joint control and the mathematics of kinematic solutions for positioning multiple joints.
