1.1 Node topology as illustrated by rqt_graph
1.2 Output of rqt_console with minimal nodes launched
1.3 Output of rqt_console with minimal nodes running and rosbag running
1.4 Output of rqt_console with minimal_subscriber and rosbag play of recorded (bagged) data
2.1 Output of rqt_console and action client and action server terminals with timer example
3.2 STDR stalls at collision after executing forward motion command from start position
3.3 STDR after approximate 90-degree counter-clockwise rotation
3.4 STDR stalls again at collision after executing another forward motion command
3.5 STDR final pose after executing programmed speed control
3.6 STDR commanded and actual speed and yaw rate versus time
3.7 Gazebo display of empty world with gravity set to 0
3.8 Gazebo display after loading rectangular prism model
3.9 Gazebo display after loading rectangular prism and cylinder
3.10 Gazebo display of two-link, one-DOF robot URDF model
3.11 Gazebo display of minimal robot with minimal controller
3.12 Transient response of minimal robot with minimal controller
3.13 Gazebo display of minimal robot contacting rigid object
3.14 Contact transient when colliding with table
3.15 Transient response to step position command with ROS PD controller
3.16 Graphical display of URDF tree for mobot
3.17 Gazebo view of mobot in empty world
3.18 Response of mobot to step velocity commands in Gazebo simulation
3.19 Gazebo simulation of mobot with contacts display enabled
3.20 Gazebo display of a mobot in starting pen
3.21 Gazebo display of combined mobile base and minimal arm models
3.22 Gazebo display of Baxter robot model
3.23 Gazebo display of DaVinci robot model
3.24 Gazebo display of Atlas robot model
5.1 An rviz view of Atlas robot model with LIDAR sensor display
5.2 An rviz view of Atlas robot in lab with interpretation of actual LIDAR data
5.4 rviz view of simple mobile robot with one-DOF arm
5.5 Adding marker display in rviz
5.6 Markers displayed in rviz from example_marker_topic
5.7 Markers at height 1.0 after rosservice call
5.8 Screenshot of triad display node with triad_display_test_node
5.9 Adding interactive marker to rviz display
5.10 Display of interactive marker in rviz
5.11 Gazebo view of simple mobile robot with LIDAR sensor in a virtual world
5.12 rviz view of simple mobile robot with LIDAR sensor data being displayed
5.13 Gazebo view of simple mobile robot in virtual world and display of emulated camera
5.15 View of rviz during addition of plug-in tool
5.16 rviz view showing selection of single LIDAR point to be published
6.1 Standard camera frame definition
6.2 Gazebo simulation of simple camera and display with image viewer
6.3 Camera calibration tool interacting with Gazebo model
6.4 rviz view of world frame and left-camera optical frame in simple stereo camera model
6.5 Screenshot during stereo camera calibration process, using simulated cameras
6.6 Result of running find_red_pixels on left-camera view of red block
6.7 Result of running Canny edge detection on left-camera view of a red block
7.1 Gazebo and rviz views of LIDAR wobbler–wide view
7.2 Gazebo and rviz views of LIDAR wobbler–zoomed view of sideways can on ground plane
7.3 Gazebo view of stereo-camera model viewing can
7.4 Display of can images–right, left and disparity
7.5 rviz view of three-dimensional points computed from stereo vision
7.6 Selection of points in rviz (cyan patch)–stereo vision view
7.7 Selection of points in rviz (cyan patch)–Kinect view
8.1 rviz view of point cloud generated and published by display_ellipse
8.2 rviz view of image read from disk and published by display_pcd_file
8.3 rviz view of image read from disk, down-sampled and published by find_plane_pcd_file
8.4 Scene with down-sampled point cloud and patch of selected points (in cyan).
8.5 Points computed to be coplanar with selected points.
8.6 Scene of object on table viewed by Kinect camera.
9.1 Example computed triangular velocity profile trajectories
9.2 Example computed trapezoidal velocity profile trajectories
9.3 Example open-loop control of mobot; intended motion is along the x axis
9.5 Trajectory generation for 5 m × 5 m square path
9.6 Logic of state machine for publishing desired states
9.8 Differential-drive kinematics: incremental wheel rotations yield incremental pose changes
9.11 Pose estimate and ideal pose (time 4330 to 4342) and noisy pose per GPS (time 4342 to 4355)
9.12 Convergence of pose estimate to GPS values with robot at rest
9.14 Pose estimate tracking after initial convergence
9.18 Heading response versus time of linear system
10.1 rviz view of map constructed from recorded data of mobot moving within starting pen
10.2 Initial view of gmapping process with mobot in starting pen
10.3 Map after slight counter-clockwise rotation
10.5 Map state as robot first enters exit hallway
10.6 Global costmap of starting pen
10.7 Global plan (blue trace) to specified 2dNavGoal (triad)
10.8 Global and local costmaps with unexpected obstacle (construction barrel)
11.1 Servo tuning process for one-DOF robot
11.2 Servo tuning process for one-DOF robot–zoom on position response
11.3 Servo tuning process for one-DOF robot–20 rad/sec sinusoidal command
11.4 Velocity controller tuning process for one-DOF robot
11.5 Force sensor and actuator effort due to dropping 10 kg weight on one-DOF robot
11.6 NAC controller response to dropping 10 kg weight on one-DOF robot
11.7 Seven-DOF arm catching and holding weight using NAC
11.8 Coarse trajectory published at irregularly sampled points of sine wave
11.9 Action-server linear interpolation of coarse trajectory
12.1 Gazebo and rviz views of rrbot
12.2 Gazebo, tf_echo and fk test nodes with rrbot
12.3 rviz, tf_echo and fk test of ABB IRB120
12.4 Two IK solutions for rrbot–elbow up and elbow down
12.5 Example of eight IK solutions for ABB IRB120
12.6 Proposed NASA satellite-servicer arm, Gazebo view
12.7 Proposed NASA satellite-servicer arm, rviz view with frames displayed
12.8 Approximation of proposed NASA satellite-servicer arm
13.1 Conceptualization of a feed-forward network dynamic-programming problem
14.1 Gazebo view of Baxter simulator in empty world
14.2 Gazebo view of Baxter in tuck pose
14.3 rviz view of Baxter simulator model illustrating right-arm frames
14.4 rostopic echo of robot/joint_states for Baxter
14.5 Result of running pre_pose action client of Baxter arm servers
14.6 Result of running baxter_playback with pre_pose_right.jsp and pre_pose_left.jsp motion files
14.7 Result of running baxter_playback with motion file shy.jsp
15.1 Code hierarchy for object-grabber system
15.2 Torso frame and right_gripper frame for Baxter
15.3 Model coordinates obtained from Gazebo
15.4 rviz view of Baxter simulator in pre-pose, illustrating right hand and right gripper frames
15.6 Grasp pose used by action client object_grabber_action_client
15.7 Depart pose used by action client object_grabber_action_client
15.8 Drop-off pose used by action client object_grabber_action_client
15.9 Initial state after launching UR10 object-grabber nodes
15.10 UR10 block approach pose
15.11 Grasp pose used by action client object_grabber_action_client
15.13 Drop-off pose used by action client object_grabber_action_client
16.1 Point cloud including Kinect’s partial view of robot’s right arm
16.2 Offset perception of arm due to Kinect transform inaccuracy
16.3 Result of launching baxter_on_pedestal_w_kinect.launch
16.4 Result of launching coord_vision_manip.launch
16.5 Result of coordinator action service invoking perception of block
16.6 Result of coordinator action service invoking move to grasp pose
16.7 Result of coordinator action service invoking move to depart pose
16.8 Result of coordinator action server invoking object drop-off
17.1 Gazebo view of a mobile manipulator model
17.2 rviz and Gazebo views of mobile manipulator immediately after launch
17.3 rviz and Gazebo views of mobile manipulator stacking fetched block