Recognizing objects

There are several commands we need to run to start recognition using a trained model.

Starting roscore:

    $ roscore

Starting the ROS driver for Kinect:

    $ roslaunch openni_launch openni.launch
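
Once the driver is up, a quick sanity check is to list the topics it publishes; you should see the depth and RGB image topics under the /camera namespace:

    $ rostopic list | grep /camera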

Setting the ROS parameters for the Kinect driver:

    $ rosrun dynamic_reconfigure dynparam set /camera/driver depth_registration True
    $ rosrun dynamic_reconfigure dynparam set /camera/driver image_mode 2
    $ rosrun dynamic_reconfigure dynparam set /camera/driver depth_mode 2
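
To confirm that the parameters were applied, we can dump the driver's current configuration with dynparam; the values of depth_registration, image_mode, and depth_mode should match what was set above:

    $ rosrun dynamic_reconfigure dynparam get /camera/driver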

Republishing the depth and RGB image topics using topic_tools relay:

    $ rosrun topic_tools relay /camera/depth_registered/image_raw /camera/depth/image_raw
    $ rosrun topic_tools relay /camera/rgb/image_rect_color /camera/rgb/image_raw
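
Before starting detection, it is worth confirming that the relayed topics are actually being republished:

    $ rostopic hz /camera/depth/image_raw
    $ rostopic hz /camera/rgb/image_raw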

We can now start recognition, and we can use different pipelines to perform detection. The following command uses the tod (textured object detection) pipeline, which works well for textured objects:

    $ rosrun object_recognition_core detection -c `rospack find object_recognition_tod`/conf/detection.ros.ork --visualize
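
With this configuration, the detection results are published as a RecognizedObjectArray message; assuming the default topic name /recognized_object_array, we can inspect the detections from the terminal:

    $ rostopic echo /recognized_object_array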

Alternatively, we can use the tabletop pipeline, which detects objects placed on a flat surface, such as a table:

    $ rosrun object_recognition_core detection -c `rospack find object_recognition_tabletop`/conf/detection.object.ros.ork
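
Besides the recognized objects, the tabletop pipeline also publishes the detected support planes; assuming the default configuration, they appear on the /table_array topic, which is used for visualization later:

    $ rostopic echo /table_array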

You could also use the linemod pipeline, which works best for rigid object recognition:

    $ rosrun object_recognition_core detection -c `rospack find object_recognition_linemod`/conf/detection.object.ros.ork

After running a detector, we can visualize the detections in Rviz. Let's start Rviz and add the proper display types, as shown in the following screenshot:

    $ rosrun rviz rviz

Figure 23: Object detection visualized in Rviz

The Fixed Frame can be set to camera_rgb_frame. Then, we have to add a PointCloud2 display subscribing to the /camera/depth_registered/points topic. To display the detected object and its name, you have to add a new display type called OrkObject, which is installed along with the object recognition packages. You can see the object being detected, as shown in the previous screenshot.
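
The OrkObject display subscribes to object_recognition_msgs/RecognizedObjectArray messages, the output type of the detection pipelines; to see which fields each detection carries, such as the object key, confidence, and pose, you can print the message definition:

    $ rosmsg show object_recognition_msgs/RecognizedObjectArray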

If we use the tabletop pipeline, it will mark the planar area on which the object is placed, as shown in the next screenshot. This pipeline is well suited to grasping objects from a table, and it works well with the ROS MoveIt! package.

Figure 24: Tabletop detection visualized in Rviz

For visualization, you need to add an OrkTable display with the /table_array topic and a MarkerArray display with the /tabletop/clusters topic.
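
If the table or the clusters do not show up, verify from the terminal that both topics are being published while the tabletop detector is running:

    $ rostopic hz /table_array
    $ rostopic hz /tabletop/clusters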

We can add any number of objects to the database; detection accuracy depends on the quality of the model, the quality of the 3D input, and the processing power of the PC.
