Configuring TurtleBot and installing the 3D sensor software

A few minor but important environment variables and software packages are needed for the TurtleBot, depending on your choice of 3D sensor. We have attached a Kinect Xbox 360 sensor to our TurtleBot, but we provide instructions to configure each of the 3D sensors mentioned in this chapter. These environment variables are used by the ROS launch files to start the correct camera drivers. In ROS Kinetic, the Kinect, ASUS, and RealSense sensors are supported by different camera drivers, as described in the following sections.

Kinect

The environment variables for the Kinect sensors are as follows:

export KINECT_DRIVER=freenect
export TURTLEBOT_3D_SENSOR=kinect

These variables should be added to the ~/.bashrc files of both the TurtleBot and the remote computer, as shown in the sketch below. For mapping and navigation, a common 3dsensor.launch file is used, and these environment variables identify the 3D vision sensor attached to the TurtleBot.
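A minimal sketch of appending the variables to ~/.bashrc and applying them to the current shell (assuming the default ~/.bashrc location) is as follows:

$ echo "export KINECT_DRIVER=freenect" >> ~/.bashrc
$ echo "export TURTLEBOT_3D_SENSOR=kinect" >> ~/.bashrc
$ source ~/.bashrc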

Libfreenect is an open source library that provides an interface for the Microsoft Kinect to be used with Linux, Windows, and macOS. ROS packages for the Kinect 360 and Kinect One are installed with the TurtleBot software installation described in the Setting up to control a real TurtleBot 2 section in Chapter 3, Driving Around with TurtleBot. These ROS packages are:

  • ros-kinetic-libfreenect
  • ros-kinetic-freenect-camera
  • ros-kinetic-freenect-launch
  • ros-kinetic-rgbd-launch
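
These packages are normally pulled in by the TurtleBot installation. If they are missing, they can be installed individually; a sketch, assuming the standard ROS Kinetic apt repositories are configured:

$ sudo apt-get install ros-kinetic-libfreenect ros-kinetic-freenect-camera
$ sudo apt-get install ros-kinetic-freenect-launch ros-kinetic-rgbd-launch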

Kinect for Windows v2 requires a different camera driver named libfreenect2 and the iai_kinect2 software toolkit. The installation of this software is described in Chapter 9, Flying a Mission with Crazyflie.

Note

For the latest information on the ROS freenect software, check the ROS wiki at http://wiki.ros.org/freenect_launch. Maintainers of the freenect software utilize as much of the OpenNI2 software as possible to preserve compatibility.

ASUS and PrimeSense

The TurtleBot software for ROS Kinetic works with the ASUS Xtion PRO by default. It is possible to add the following environment variable:

export TURTLEBOT_3D_SENSOR=asus_xtion_pro

although, at the time of writing, it is not necessary.
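To check which sensor the TurtleBot launch files will use, the variable can be inspected from a terminal (a quick check, not a required step):

$ echo $TURTLEBOT_3D_SENSOR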

The openni2_camera ROS package supports the ASUS Xtion, Xtion PRO, and the PrimeSense 1.08 and 1.09 cameras; it does not support any Kinect devices. The package provides drivers for these cameras to publish raw rgb, depth, and IR image streams.

ROS packages for OpenNI2 are installed with the TurtleBot software installation described in the Setting up to control a real TurtleBot 2 section in Chapter 3, Driving Around with TurtleBot. These ROS packages are:

  • ros-kinetic-openni2-camera
  • ros-kinetic-openni2-launch
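
To verify that the camera driver works on its own, the OpenNI2 launch file can be started directly; a minimal sketch (the published topic names depend on the launch configuration):

$ roslaunch openni2_launch openni2.launch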

Note

For the latest information on the ROS OpenNI2 software, check the ROS wiki at http://wiki.ros.org/openni2_launch.

Intel RealSense

The environment variable for one of the Intel RealSense cameras is as follows:

export TURTLEBOT_3D_SENSOR=<R200, F200, SR300, ZR300>

Only one of the camera identifiers shown within the angle brackets should be used. This variable should be added to the ~/.bashrc files of both the TurtleBot and the remote computer.
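For example, for an R200 camera, the line would look like the following (the lowercase r200 identifier is an assumption; verify the exact string expected by your TurtleBot launch files):

export TURTLEBOT_3D_SENSOR=r200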

The RealSense ROS packages enable the use of Intel's RealSense cameras with ROS. Librealsense is the underlying driver library for communicating with all of the cameras, and the realsense_camera ROS package provides the camera node that publishes the image data. These packages are installed with the TurtleBot software installation described in the Setting up to control a real TurtleBot 2 section in Chapter 3, Driving Around with TurtleBot. To install these packages on the TurtleBot 3 Waffle SBC, use the following commands:

$ sudo apt-get install ros-kinetic-librealsense
$ sudo apt-get install ros-kinetic-realsense-camera
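
After installation, the camera node can be started on its own as a quick check; a sketch for the R200 camera (the launch file name should be verified against the installed realsense_camera package):

$ roslaunch realsense_camera r200_nodelet_default.launch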

Camera software structure

The freenect_camera, openni2_camera, and realsense_camera packages are ROS nodelet packages used to streamline the processing of an enormous quantity of image data. Initially, a nodelet manager is launched, and then nodelets are added to the manager. Nodelets allow multiple algorithms to run in a single process without creating multiple copies of the data when messages are passed between them. The default 3D sensor data type for the camera nodelet processing is depth_image, and the camera driver software publishes depth_image message streams. These messages can be converted to point cloud data types to make them more usable by Point Cloud Library (PCL) algorithms, although basic navigation operations on the TurtleBot use depth_images for faster processing. Launching nodelets to convert the raw depth, rgb, and IR data streams into depth_image, disparity_image, and registered_point_cloud messages handles all of the conversions in one process.
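As a rough illustration of the nodelet pattern (not the exact TurtleBot launch sequence; the topic names here are assumptions that depend on the camera driver), a manager can be started and a depth_image_proc conversion nodelet loaded into it from the command line:

$ rosrun nodelet nodelet manager __name:=camera_nodelet_manager
$ rosrun nodelet nodelet load depth_image_proc/point_cloud_xyz camera_nodelet_manager image_rect:=/camera/depth/image_rect camera_info:=/camera/depth/camera_info

Running the conversion inside the same manager process as the camera driver is what avoids serializing and copying each depth frame between separate processes.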

The depthimage_to_laserscan package uses the depth_image data to create sensor_msgs/LaserScan messages, which require less processing power when generating maps. For more complex applications, converting depth_images to the point cloud format offers the advantage of using the PCL algorithms.
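A minimal sketch of running the conversion node on its own (the depth image topic name is an assumption and depends on the camera driver):

$ rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw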

Defining terms

The important terms that are used in configuring TurtleBot are as follows:

  • Depth cloud: Depth cloud is another name for the depth_image produced by a 3D sensor, such as the Kinect, ASUS, PrimeSense, and RealSense depth cameras.
  • Point cloud: A point cloud is a set of points with x, y, and z coordinates that represent the surface of an object.
  • Registered DepthCloud and Registered PointCloud: These terms are used by ROS for DepthCloud or PointCloud data that has been aligned with and colored by the rgb image data. These data streams are available when the depth_registration option is selected (set to true).
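
For example, with the OpenNI2 driver, registration can be requested at launch time; a sketch (the freenect launch files accept a similar argument, but verify this against your installed version):

$ roslaunch openni2_launch openni2.launch depth_registration:=true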