
Until now, you've been working with the PointCloud generated by the 3D sensor built into the Fetch robot. However, as we've seen previously, this is not the only way to add perception to MoveIt.

In the following exercise, we will see how to add perception using the depth image directly, without relying on the PointCloud.

First, we have to make sure that there is an object in front of the robot. If there isn't, we can execute the following command to spawn one right in front of the Fetch robot:

$ rosrun gazebo_ros spawn_model -file /home/user/catkin_ws/src/object.urdf -urdf -x 1 -model my_object  
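If you don't already have the object.urdf file referenced above, a minimal model along these lines would work; the shape, dimensions, and inertial values here are just illustrative assumptions:

```xml
<?xml version="1.0"?>
<robot name="my_object">
  <link name="object_link">
    <!-- Inertial properties are required for Gazebo to simulate the body -->
    <inertial>
      <mass value="1.0"/>
      <inertia ixx="0.01" ixy="0.0" ixz="0.0" iyy="0.01" iyz="0.0" izz="0.01"/>
    </inertial>
    <!-- A simple box, visible to the depth camera -->
    <visual>
      <geometry>
        <box size="0.1 0.1 0.3"/>
      </geometry>
    </visual>
    <collision>
      <geometry>
        <box size="0.1 0.1 0.3"/>
      </geometry>
    </collision>
  </link>
</robot>
```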

Next, we have to modify the configuration files in order to use the DepthImageOctomapUpdater plugin instead of the PointCloudOctomapUpdater we are currently using. The sensor configuration file should look like this:

sensors:
  - sensor_plugin: occupancy_map_monitor/DepthImageOctomapUpdater
    image_topic: /head_camera/depth_registered/image_raw
    queue_size: 5
    near_clipping_plane_distance: 0.3
    far_clipping_plane_distance: 5.0
    skip_vertical_pixels: 1
    skip_horizontal_pixels: 1
    shadow_threshold: 0.2
    padding_scale: 4.0
    padding_offset: 0.03
    filtered_cloud_topic: output_cloud
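For this YAML to take effect, it has to be loaded onto the parameter server by the MoveIt sensor manager launch file. In a typical MoveIt configuration package that looks roughly like the sketch below; the package name, file paths, and parameter values here are assumptions and may differ in your setup:

```xml
<launch>
  <!-- Load the sensor definitions (the YAML above) for the occupancy map monitor -->
  <rosparam command="load" file="$(find fetch_moveit_config)/config/sensors.yaml" />
  <!-- Frame and resolution used to build the Octomap -->
  <param name="octomap_frame" type="string" value="odom" />
  <param name="octomap_resolution" type="double" value="0.025" />
  <param name="max_range" type="double" value="5.0" />
</launch>
```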

Finally, we have to launch the whole environment again and plan a trajectory to check that the robot is detecting its surroundings correctly.

And that's it! We have finished this section! I really hope that you have enjoyed it and, most of all, have learned a lot! In the next section, we are going to learn about grasping.
