List of Figures

1.1 Node topology as illustrated by rqt_graph

1.2 Output of rqt_console with minimal nodes launched.

1.3 Output of rqt_console with minimal nodes running and rosbag running

1.4 Output of rqt_console with minimal_subscriber and rosbag play of recorded (bagged) data

1.5 Output of rqt_plot with minimal_simulator, minimal_controller and step velocity-command input via console

2.1 Output of rqt_console, together with action-client and action-server terminals, for the timer example.

3.1 Display from STDR launch.

3.2 STDR stalls at collision after executing forward motion command from start position

3.3 STDR after approximate 90-degree counter-clockwise rotation

3.4 STDR stalls again at collision after executing another forward motion command

3.5 STDR final pose after executing programmed speed control

3.6 STDR commanded and actual speed and yaw rate versus time

3.7 Gazebo display of empty world with gravity set to 0

3.8 Gazebo display after loading rectangular prism model

3.9 Gazebo display after loading rectangular prism and cylinder

3.10 Gazebo display of two-link, one-DOF robot URDF model

3.11 Gazebo display of minimal robot with minimal controller

3.12 Transient response of minimal robot with minimal controller

3.13 Gazebo display of minimal robot contacting rigid object

3.14 Contact transient when colliding with table

3.15 Transient response to step position command with ROS PD controller

3.16 Graphical display of URDF tree for mobot

3.17 Gazebo view of mobot in empty world

3.18 Response of mobot to step velocity commands in Gazebo simulation

3.19 Gazebo simulation of mobot with contacts display enabled

3.20 Gazebo display of a mobot in starting pen

3.21 Gazebo display of combined mobile base and minimal arm models

3.22 Gazebo display of Baxter robot model

3.23 Gazebo display of DaVinci robot model

3.24 Gazebo display of Atlas robot model

5.1 An rviz view of Atlas robot model with LIDAR sensor display

5.2 An rviz view of Atlas robot in lab with interpretation of actual LIDAR data

5.3 Physical and simulated Baxter robots. rviz view (b) displays points from simulated Kinect sensor with points colorized by z height

5.4 rviz view of simple mobile robot with one-DOF arm

5.5 Adding marker display in rviz

5.6 Markers displayed in rviz from example_marker_topic

5.7 Markers at height 1.0 after rosservice call

5.8 Screenshot of triad display node with triad_display_test_node

5.9 Adding interactive marker to rviz display

5.10 Display of interactive marker in rviz

5.11 Gazebo view of simple mobile robot with LIDAR sensor in a virtual world

5.12 Rviz view of simple mobile robot with LIDAR sensor data being displayed

5.13 Gazebo view of simple mobile robot in virtual world and display of emulated camera

5.14 Gazebo, rviz and image-view displays of simple mobile robot with LIDAR, camera and Kinect sensor in virtual world

5.15 View of rviz during addition of plug-in tool

5.16 rviz view showing selection of single LIDAR point to be published

5.17 Gazebo view of mobot in simulated gas station, and rviz display of simulated Kinect data. A small, light-blue patch of points on the pump handle displays user-selected points.

6.1 Standard camera frame definition.

6.2 Gazebo simulation of simple camera and display with image viewer.

6.3 Camera calibration tool interacting with Gazebo model.

6.4 Rviz view of world frame and left-camera optical frame in simple stereo camera model.

6.5 Screenshot during stereo camera calibration process, using simulated cameras.

6.6 Result of running find_red_pixels on left-camera view of red block.

6.7 Result of running Canny edge detection on left-camera view of a red block.

7.1 Gazebo and rviz views of LIDAR wobbler: wide view.

7.2 Gazebo and rviz views of LIDAR wobbler: zoomed view of sideways can on ground plane.

7.3 Gazebo view of stereo-camera model viewing can.

7.4 Display of can images: right, left and disparity.

7.5 rviz view of three-dimensional points computed from stereo vision.

7.6 Selection of points in rviz (cyan patch): stereo vision view.

7.7 Selection of points in rviz (cyan patch): Kinect view.

8.1 Rviz view of point cloud generated and published by display_ellipse.

8.2 Rviz view of image read from disk and published by display_pcd_file.

8.3 Rviz view of image read from disk, down-sampled and published by find_plane_pcd_file.

8.4 Scene with down-sampled point cloud and patch of selected points (in cyan).

8.5 Points computed to be coplanar with selected points.

8.6 Scene of object on table viewed by Kinect camera.

8.7 Object-finder estimation of toy-block model frame using PCL processing. Origin is in the middle of the block, z axis is normal to the top face, and x axis is along the major axis.

9.1 Example computed triangular velocity profile trajectories

9.2 Example computed trapezoidal velocity profile trajectories

9.3 Example open-loop control of mobot. Intended motion is along x axis.

9.4 Example open-loop control of mobot with higher angular acceleration limit. Intended motion is along x axis.

9.5 Trajectory generation for 5 m × 5 m square path

9.6 Logic of state machine for publishing desired states

9.7 Re-published mobot Gazebo state and noisy state with robot executing square trajectory under open-loop control

9.8 Differential-drive kinematics: incremental wheel rotations yield incremental pose changes

9.9 Comparison of Gazebo ideal state and odom estimated state as simulated robot follows 5 m × 5 m square path

9.10 Ideal state and odom state estimates for 5 m × 5 m path diverge. Note the effect of the 5 mm right-wheel diameter error on the odom estimate

9.11 Pose estimate and ideal pose (time 4330 to 4342) and noisy pose per GPS (time 4342 to 4355)

9.12 Convergence of pose estimate to GPS values with robot at rest

9.13 Convergence of heading estimate based on error between GPS and odometry estimates of translation

9.14 Pose estimate tracking after initial convergence

9.15 Distribution of candidate poses for mobot within map upon start-up. The candidate poses have large initial variance.

9.16 Distribution of candidate poses for mobot within map after motion. The candidate poses are concentrated in a small bundle near the true robot pose.

9.17 Offset response versus time, linear model, linear controller, 1 Hz controller, critically damped

9.18 Heading response versus time of linear system

9.19 Control effort history of linear system versus time. Note that spin command may exceed actual physical limitations

9.20 Offset versus time, non-linear model, 1 Hz linear controller. Response to small initial error is good, similar to linear model response.

9.21 Path following of non-linear robot with linear controller and small initial error. Convergence to precise path following is well behaved.

9.22 Phase space of linear control of non-linear robot. Note linear relation between displacement and heading error near the convergence region.

9.23 Linear controller on non-linear robot with larger initial displacement. Displacement error versus time oscillates.

9.24 Path from linear controller on non-linear robot with large initial error y versus x. Robot spins in circles, failing to converge on desired path.

9.25 State versus time for non-linear control of non-linear robot. Behavior at large initial errors is appropriate and differs from linear control, while the response blends into linear-control behavior for small errors near convergence.

9.26 Initial state of mobot in starting pen after launch. Note that in the rviz view, four frames are displayed, but these frames initially coincide.

9.27 Snapshot of progress of mobot steering with respect to state estimation from integrated AMCL and odometry. Note that the odometry frame, near the bottom of the rviz view, has translated and rotated significantly since startup, illustrating the amount of cumulative drift of odometry to this point.

9.28 Result of mobot steering with respect to state estimation from integrated AMCL and odometry. The odometry frame, lower right in the rviz view, has translated and rotated dramatically during navigation. Nonetheless, the robot's true pose, the AMCL pose estimate and the integrated AMCL and odometry pose estimate are shown to be in approximate agreement.

10.1 rviz view of map constructed from recorded data of mobot moving within starting pen

10.2 Initial view of gmapping process with mobot in starting pen

10.3 Map after slight counter-clockwise rotation

10.4 Map after full rotation

10.5 Map state as robot first enters exit hallway

10.6 Global costmap of starting pen

10.7 Global plan (blue trace) to specified 2dNavGoal (triad)

10.8 Global and local costmaps with unexpected obstacle (construction barrel)

10.9 Example move-base client interaction. Destination is sent as goal message to move-base action server that plans a route (blue trace) to the destination (triad goal marker).

11.1 Servo tuning process for one-DOF robot

11.2 Servo tuning process for one-DOF robot: zoom on position response

11.3 Servo tuning process for one-DOF robot: 20 rad/sec sinusoidal command

11.4 Velocity controller tuning process for one-DOF robot

11.5 Force sensor and actuator effort due to dropping 10 kg weight on one-DOF robot

11.6 NAC controller response to dropping 10 kg weight on one-DOF robot

11.7 Seven-DOF arm catching and holding weight using NAC

11.8 Coarse trajectory published at irregularly sampled points of sine wave

11.9 Action-server linear interpolation of coarse trajectory

12.1 Gazebo and rviz views of rrbot

12.2 Gazebo, tf_echo and fk test nodes with rrbot

12.3 rviz, tf_echo and fk test of ABB IRB120

12.4 Two IK solutions for rrbot: elbow up and elbow down

12.5 Example of eight IK solutions for ABB IRB120

12.6 Proposed NASA satellite-servicer arm, Gazebo view

12.7 Proposed NASA satellite-servicer arm, rviz view with frames displayed

12.8 Approximation of proposed NASA satellite-servicer arm

13.1 Conceptualization of a feed-forward network dynamic-programming problem

13.2 Simple numerical example of dynamic programming consisting of five layers with transition costs assigned to each allowed transition

13.3 First step of solution process. Minimum cost to go is computed for nodes H and I. Two of the four links from layer four to layer five are removed because they are provably suboptimal

13.4 Second step of solution process. Minimum cost to go is computed for nodes E, F and G in layer three (costs 2, 3 and 2, respectively). Three of the six links have been pruned, leaving only the single optimal option for each node in layer three. It can be seen at this point that node I will not be involved in the optimal solution.

13.5 Third step of solution process. Minimum cost to go is computed for nodes B, C and D in layer two (costs 4, 4 and 3, respectively). Six of the nine links have been pruned, leaving only the single optimal option for each node in layer two.

13.6 Final step of solution process. Minimum cost to go is computed from node A. Only a single path through the network constituting the optimal solution survives.

14.1 Gazebo view of Baxter simulator in empty world

14.2 Gazebo view of Baxter in tuck pose

14.3 rviz view of Baxter simulator model illustrating right-arm frames

14.4 rostopic echo of robot/joint_states for Baxter

14.5 Result of running pre_pose action client of Baxter arm servers

14.6 Result of running baxter_playback with pre_pose_right.jsp and pre_pose_left.jsp motion files

14.7 Result of running baxter_playback with motion file shy.jsp

15.1 Code hierarchy for object-grabber system

15.2 Torso frame and right_gripper frame for Baxter

15.3 Model coordinates obtained from Gazebo

15.4 rviz view of Baxter simulator in pre-pose, illustrating right hand and right gripper frames

15.5 Approach pose from action client object_grabber_action_client, with pre-positioned block at known precise coordinates

15.6 Grasp pose used by action client object_grabber_action_client

15.7 Depart pose used by action client object_grabber_action_client

15.8 Drop-off pose used by action client object_grabber_action_client

15.9 Initial state after launching UR10 object-grabber nodes

15.10 UR10 block approach pose

15.11 Grasp pose used by action client object_grabber_action_client

15.12 UR10 depart pose

15.13 Drop-off pose used by action client object_grabber_action_client

16.1 Point cloud including Kinect’s partial view of robot’s right arm

16.2 Offset perception of arm due to Kinect transform inaccuracy

16.3 Result of launching baxter_on_pedestal_w_kinect.launch

16.4 Result of launching coord_vision_manip.launch

16.5 Result of coordinator action service invoking perception of block

16.6 Result of coordinator action service invoking move to grasp pose

16.7 Result of coordinator action service invoking move to depart pose

16.8 Result of coordinator action server invoking object drop-off

17.1 Gazebo view of a mobile manipulator model

17.2 rviz and Gazebo views of mobile manipulator immediately after launch

17.3 rviz and Gazebo views of mobile manipulator stacking fetched block
