Running the viso2 online demo

At this point, we are ready to run the visual odometry algorithm: our stereo pair is calibrated, its frame has the appropriate name for viso2 (ending in _optical), and the tf for the camera and optical frames is being published. However, before using our own stereo pair, we are going to test viso2 with the bag files provided at http://srv.uib.es/public/viso2_ros/sample_bagfiles/; just run bag/viso2_demo/download_amphoras_pool_bag_files.sh to obtain all the bag files (about 4 GB in total). Then, we have launch files for both the monocular and stereo odometers in the launch/visual_odometry folder. In order to run the stereo demo, we have a launch file on top of them that plays the bag files and also allows you to inspect and visualize their contents. For instance, to calibrate the disparity image algorithm, run the following command:

    $ roslaunch chapter5_tutorials viso2_demo.launch config_disparity:=true view:=true  
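If you have not downloaded the bag files yet, run the download script first; otherwise, the launch file has nothing to play. Assuming the script path is relative to the package root, as the text above suggests, the steps are:

    $ roscd chapter5_tutorials
    $ bag/viso2_demo/download_amphoras_pool_bag_files.sh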

You will see the left and right images, the disparity image, and an rqt_reconfigure interface to configure the disparity algorithm. You need to perform this tuning because the bag files only contain the RAW images. We have found good parameters, which are stored in config/viso2_demo/disparity.yaml. In the following screenshot, you can see the results obtained with them, where you can clearly appreciate the depth of the rocks in the stereo images:
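The tuning itself happens in rqt_reconfigure, but the resulting values end up in the YAML file mentioned above. As a reference, the following sketch shows the kind of content such a file holds; the parameter names are the standard stereo_image_proc disparity parameters, while the values are only illustrative and may differ from the ones shipped in config/viso2_demo/disparity.yaml:

    # Illustrative stereo_image_proc disparity parameters; the values are
    # examples, not the exact ones tuned for the amphoras pool bags.
    prefilter_size: 9
    prefilter_cap: 31
    correlation_window_size: 21
    min_disparity: 0
    disparity_range: 64
    uniqueness_ratio: 15
    texture_threshold: 10
    speckle_size: 100
    speckle_range: 4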

In order to run the stereo odometry and see the result in rqt_rviz, run the following command:

    $ roslaunch chapter5_tutorials viso2_demo.launch odometry:=true rviz:=true
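While the demo is running, you can check from another terminal that the odometer is actually publishing. The topic name below assumes the default viso2_ros stereo_odometer node name; it may be remapped in the launch file:

    $ rostopic hz /stereo_odometer/odometry
    $ rostopic echo /stereo_odometer/odometry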

Note that we have provided an adequate configuration for rqt_rviz in config/viso2_demo/rviz.rviz, which is automatically loaded by the launch file. The following sequence of images shows different instances of the textured 3D point cloud, along with the /odom and /stereo_optical frames that show the camera pose estimated by the stereo odometer. The third image uses a Decay Time of three seconds for the point cloud, so we can see how the points accumulate over time. This way, with good images and odometry, we can even see a map drawn in rqt_rviz, but this is quite difficult and generally requires a SLAM algorithm (see Chapter 7, Using Sensors and Actuators with ROS). All of this is shown in the following screenshots:
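You can also watch the pose estimate directly on the command line by querying the transform between the two frames mentioned above (assuming these are the frame names actually published in your tf tree):

    $ rosrun tf tf_echo odom stereo_optical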


As new frames arrive, the algorithm is able to create a 3D reconstruction, as shown in the following screenshot, where we can see the heights of the rocks on the seabed:

If we set a Decay Time of three seconds, the different point clouds from consecutive frames will be shown together, so we can see a map of the bottom of the sea. Remember that the map will contain some errors because of the drift of the visual odometry algorithm:
