After installing the development operating system (Ubuntu), the target operating system (ROS), and their associated tools in the last chapter, we are going to “play” with the tools to build a very simple rover with RViz and drive it in the Gazebo simulator. We will also build, test, and run the chassis of the rover one part at a time.
Objectives
Understand the relationship between ROS, RViz, and Gazebo
Expand your understanding of ROS commands
Explore RViz to create a simple rover
Use Gazebo to move the rover in a simple virtual environment
ROS, RViz, and Gazebo
Figure 4-1 graphically describes our project’s major components and their interrelationships. The blue boxes are our physical computing systems (laptop and rover), which run the Ubuntu operating system. The orange boxes represent software components and libraries installed on each system. The yellow boxes are internal ROS tools for developing and testing ROS models. Once a virtual ROS model has been thoroughly vetted, the executable script is transferred to the physical rover (green arrow). Assuming everything is working, our rover will be able to move about in the real world and transmit data back to the laptop (gray arrow). The gray arrow also represents the “human-in-the-loop” decisions that might be used to control the rover, such as “Start,” “Come Home,” or “Pause.” Gazebo will allow us to view the effects of physics on the chassis, simulate the power applied to each motor, and simulate the algorithms.
Essential ROS Commands
Command | Format | Action |
---|---|---|
roscore | $ roscore | Starts the master node |
rosrun | $ rosrun [package] [executable] | Executes a program and creates nodes |
rosnode | $ rosnode info [node name] | Shows information about active nodes |
rostopic | $ rostopic &lt;subcommand&gt; &lt;topic name&gt; (subcommands: list, info, and type) | Shows information about ROS topics |
rosmsg | $ rosmsg &lt;subcommand&gt; [package]/[message] (subcommands: list, info, and type) | Shows information about message types |
rosservice | $ rosservice &lt;subcommand&gt; [service] (subcommands: args, call, find, info, list, and type) | Lists, inspects, and calls ROS services at runtime |
rosparam | $ rosparam &lt;subcommand&gt; [parameter] | Gets and sets data used by nodes |
Rather than go into the details of each command, we will explore them more deeply when we use them in the text.
Robot Visualization (RViz)
We will build the simplified virtual rover shown in Figure 4-2 using RViz, a 3D modeling tool for ROS. RViz lets us design and simulate 3D components, such as wheels and sensors. Besides defining the dimensions of the components (H×W×D), we can model characteristics (color, material, etc.) and behavior (speed, intelligence, etc.). RViz can display 2D and 3D data from optical cameras, IR sensors, stereo cameras, lasers, radar, and LiDAR. RViz lets us build and test individual components and systems, and it offers limited testing of component interactions in the environment. Because RViz can test both the virtual and the physical rover, we can catch design and logic errors in the simulator before and after building the hardware, and debug the AI rover’s sub-system nodes and routines inexpensively.
Finally, in terminal 3, we verify that roscore is communicating with rviz by running rostopic list (yellow). The output lists the active pipelines between the nodes running in ROS—those in the yellow boxes belong to rviz and roscore. A pipeline is a computer science term describing a dedicated pathway for passing messages between components. We will use these pipelines later on, along with rostopic, to look at the messages being passed.
After clicking on the RViz quick-launch icon, the RViz program runs by executing the rosrun rviz rviz command, as shown in Figure 4-4.
If the rosrun rviz rviz command generates an error message, verify that the line source ~/catkin_ws/devel/setup.bash is in the .bashrc file in your home directory.
If the rosrun rviz rviz command still does not work, reinstall the entire ros-noetic-desktop-full installation package. Then examine the printout and determine whether any installation errors occurred during the reinstall.
Interact: Reveal interactive markers.
Move Camera: Move the camera around in the Views panel with the mouse or keyboard.
Select: Point-and-drag a wireframe box around the 3D objects.
Focus Camera: Focus on a point or an object.
Measure: Measure distances between objects.
2D Pose Estimate: Set an initial estimate of the rover’s position and orientation.
Publish Point (Not Seen): Publishes coordinates of an object.
RViz tutorials can be found at the following locations:
http://wiki.ros.org/RViz/Tutorials and http://wiki.ros.org/RViz/UserGuide
Catkin Workspace Revisited
1. cd ~/catkin_ws/src
2. catkin_create_pkg ai_rover
3. cd ~/catkin_ws
4. mkdir src/ai_rover/urdf
5. mkdir src/ai_rover/launch
6. catkin_make
This is the “required” folder structure for ROS projects. The root directory is catkin_ws and is hardcoded in the catkin_make script. The build and devel directories contain libraries and scripts needed to compile and execute projects. When developing ROS scripts applicable to all packages, we store the files in the src directory. The ai_rover (sub-)directory contains scripts specific to the AI rover project. The urdf directory contains the description of the rover components. There are two other files of interest: CMakeLists.txt and package.xml (do not edit them!). CMakeLists.txt is created in two folders, src and ai_rover, and is used to compile scripts in the respective folders. The other file, package.xml, holds the package’s metadata, such as its name, version, and dependencies.
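The six commands above produce a layout along these lines (a sketch; catkin_make also generates additional files inside build and devel that are omitted here):

```
catkin_ws/
├── build/              # compiled artifacts (generated; do not edit)
├── devel/              # setup scripts and libraries (generated)
└── src/
    ├── CMakeLists.txt
    └── ai_rover/
        ├── CMakeLists.txt
        ├── package.xml
        ├── launch/
        └── urdf/
```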
The Relationship Between URDF and SDF
The SDF file uses the initial static, dynamic, and kinematic characteristics of the AI rover described in the URDF to initialize the animated AI rover in Gazebo. For example, sensor, surface, texture, and joint-friction properties can all be defined within the URDF file and converted into an SDF file. We can define dynamic effects in the URDF file that might be found within the environment, such as cave-ins, collapsing floors, and explosions caused by methane build-ups. Whenever you want to add a component to the AI rover, you put it into the URDF and then convert it to the SDF.
Building the Chassis
Two required components need to be modeled in each URDF file. The link component is responsible for describing the static physical dimensions, orientation, and material of each object. The joint component describes dynamic physics, such as the amount of friction and rotational characteristics between objects.
This describes our chassis as a 3D box 0.5 m long, 0.5 m wide, and 0.25 m tall located at the origin (0,0,0) with no rotation (no roll, no pitch, no yaw). (Most simulators use the metric system.) The chassis’ base_link is the link component. All other link components will be defined relative to this base_link. Constructing the rover is similar to building a robot in real life; we will add pieces to the chassis to customize our rover. We use this initial base_link of the chassis to define the AI rover’s initial position.
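As a sketch, the chassis description above maps onto URDF elements like these (the robot and link names match those used in the text; the full file in the book may include additional elements):

```xml
<?xml version="1.0"?>
<robot name="ai_rover">
  <link name="base_link">
    <visual>
      <!-- 0.5 m long, 0.5 m wide, 0.25 m tall -->
      <geometry>
        <box size="0.5 0.5 0.25"/>
      </geometry>
      <!-- At the origin, with no roll, pitch, or yaw -->
      <origin rpy="0 0 0" xyz="0 0 0"/>
    </visual>
  </link>
</robot>
```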
Using the ROSLAUNCH Command
Import the ai_rover.urdf model.
Start the joint_state_publisher, robot_state_publisher, and the RViz 3D CAD environment.
ALL URDF/SDF files must be executable:
$ sudo chmod +rwx RViz.launch
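The two steps above could be captured in a launch file along these lines — a sketch assuming the package is named ai_rover and the file is saved as launch/RViz.launch; your node arguments may differ:

```xml
<launch>
  <!-- Load the URDF model onto the parameter server -->
  <param name="robot_description"
         textfile="$(find ai_rover)/urdf/ai_rover.urdf"/>

  <!-- Publish joint states and the resulting TF frames -->
  <node name="joint_state_publisher" pkg="joint_state_publisher"
        type="joint_state_publisher"/>
  <node name="robot_state_publisher" pkg="robot_state_publisher"
        type="robot_state_publisher"/>

  <!-- Start the RViz 3D CAD environment -->
  <node name="rviz" pkg="rviz" type="rviz"/>
</launch>
```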
Select the Add button and add RobotModel.
Select the Add button and add TF.
Finally, go to the Global Options ➤ Fixed Frame option and change the value to base_link.
There should now be a box on the main screen. Save your work!
Creating Wheels and Drives
Planar Joint : This joint allows motion in a plane perpendicular to an axis. An example of this would be a puck sliding and spinning on ice. (three DoF: two translate, one rotate)
Floating Joint : This type of joint allows motion in all six DoF (translate, rotate for each axis). An example of a joint such as this would be a wrist.
Prismatic Joint : This joint slides along an axis and has a limited upper and lower range of distance to travel. An example of this would be a spyglass telescope. Think pirate telescope. (one DoF: translate)
Continuous Joint : This joint rotates around the axis like the wheels of a car and has no upper or lower limits. (one DoF: rotate)
Revolute Joint : This joint rotates around an axis, similar to continuous, but has upper and lower bounds of angles of rotation. For example, a volume knob. (one DoF: rotate)
Fixed Joint : This joint cannot move at all. All degrees of freedom are locked. An example would be the static location of a mirror on a car door. (zero DoF)
Each wheel has two parts, the link and the joint.
The <link> of each wheel is defined as a cylinder with a radius of 0.2 m and a length of 0.1 m. Each wheel is located at (0, ±0.3, 0) and is rotated by π/2 (1.57...) radians or 90 degrees about the x-axis.
The <joint> of each wheel defines the axis of rotation as the y-axis and is defined by the XYZ triplet “0, 1, 0”. The <joint> elements define the kinematic (moving) parts of our model, with the wheels rotating around the y-axis.
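Putting the numbers above together, one wheel might be sketched as follows (shown for the left wheel; the right wheel mirrors it at y = -0.3, and the joint names follow those used later in the chapter):

```xml
<link name="left_wheel">
  <visual>
    <geometry>
      <!-- radius 0.2 m, length 0.1 m -->
      <cylinder radius="0.2" length="0.1"/>
    </geometry>
    <!-- Rotated pi/2 radians (90 degrees) about the x-axis -->
    <origin rpy="1.570795 0 0" xyz="0 0 0"/>
  </visual>
</link>

<joint name="joint_left_wheel" type="continuous">
  <parent link="base_link"/>
  <child link="left_wheel"/>
  <!-- Located at (0, 0.3, 0) relative to base_link -->
  <origin xyz="0 0.3 0" rpy="0 0 0"/>
  <!-- Axis of rotation: the y-axis -->
  <axis xyz="0 1 0"/>
</joint>
```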
The URDF file is a tree structure with the AI rover’s chassis as the root (base_link), and each wheel’s position is relative to the base link.
Our simplified virtual model’s dimensions are not the same as the physical dimensions of the physical rover. This might cause some issues with training deep learning and cognitive networks. We will discuss these issues in Chapter 12 and beyond.
Verify and launch the modified code. Your RViz display should be similar to Figure 4-10. If you do not receive a “Successfully Parsed” XML message, review your file for errors, such as a misspelling, a forgotten “>”, or a “\” used in place of “/”.
Always test file correctness after every new component is added. For example, if you add the left wheel, immediately check the correctness of the XML source code within the URDF file by executing the following:
$ check_urdf ai_rover.urdf
$ roslaunch ai_rover ai_rover.urdf
These two commands (check_urdf and roslaunch ai_rover) should be executed each time the file is modified. We will use “verify and launch” as shorthand for these two commands.
Creating AI Rover’s Caster
We now have the two wheels successfully attached to the AI rover’s chassis. To mimic the physical GoPiGo rover, we will add a caster on the lower rear of the AI rover’s chassis for “balance.” We could add a powered caster as a joint to add actuated turning, but this is still too complex. Instead, we will add the caster as a visual element and not as a joint. The caster slides along the ground plane as the wheels control the direction.
Adding Color to the AI Rover (Optional)
Collision Properties
Our simple model is finished enough to define the collision properties for the model—think of a collision property as a “bounding box.” The bounding box is the smallest box/sphere/cylinder that surrounds our model’s components, and the sum of the bounding boxes for the components is the bounding box for the rover. To do this, we add <collision> properties to each component. The collision properties are defined for Gazebo’s collision-detection engine. For each simulation time frame, the components are checked for a collision. Modeling our AI rover as many simple components optimizes collision detection.
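As a sketch, adding a `<collision>` element to the chassis link duplicates the visual geometry for the collision-detection engine (dimensions from the chassis description earlier; each wheel would get an analogous collision cylinder):

```xml
<link name="base_link">
  <visual>
    <geometry>
      <box size="0.5 0.5 0.25"/>
    </geometry>
    <origin rpy="0 0 0" xyz="0 0 0"/>
  </visual>
  <!-- Bounding box used only by the collision-detection engine -->
  <collision>
    <geometry>
      <box size="0.5 0.5 0.25"/>
    </geometry>
    <origin rpy="0 0 0" xyz="0 0 0"/>
  </collision>
</link>
```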
Verify and launch. Since the collision properties affect the dynamic physics, not the looks, you will not see any visual differences! The collision geometry is what allows the components to “bump” into other objects.
Testing the AI Rover’s Wheels
We will call this verify and launch–GUI. We can visualize movement!
If you get a “GUI has not been installed or available” error message, run the following:
$ sudo apt-get install ros-noetic-joint-state-publisher-gui
This forces the GUI to install.
joint_right_wheel: Set the angle of the wheel between ±π.
joint_left_wheel: Set the angle of the wheel between ±π.
Randomize: Randomly assign a value between ±π for each independent wheel.
Center: Set both wheels to zero radians.
Physical Properties
Notice that our wheels are spinning, but the AI rover chassis is not moving. To see the movement, we need to do two things: add physics properties and run the AI rover in Gazebo. RViz visualizes the components but does not show the physics (movement); we need to add inertial properties (mass and inertia) for each component.
An object’s inertial properties are its mass and its moment of inertia, which measures how much the object resists rotational acceleration or deceleration. For simple objects with geometric symmetry, such as a cube, cylinder, or sphere, the moment of inertia is easy to calculate. Because we modeled the AI rover with simple components, Gazebo’s optimized physics engine quickly calculates the moment of inertia.
- <inertial>
<mass>: Mass of the object, measured in kilograms.
<inertia>: The six unique terms of the symmetric 3×3 rotational inertia matrix, defined for 3D space.
</inertial>
Ixx | Ixy | Ixz |
---|---|---|
Ixy | Iyy | Iyz |
Ixz | Iyz | Izz |
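For reference, the diagonal terms of the inertia matrix for the solid shapes used in our model have standard closed-form expressions. The sketch below computes them; the 1.0 kg masses are illustrative values, not the book’s:

```python
def box_inertia(m, x, y, z):
    """Diagonal inertia terms (Ixx, Iyy, Izz) of a uniform solid box
    with mass m and side lengths x, y, z (about its center)."""
    return (m * (y**2 + z**2) / 12.0,
            m * (x**2 + z**2) / 12.0,
            m * (x**2 + y**2) / 12.0)

def cylinder_inertia(m, r, h):
    """Diagonal inertia terms of a uniform solid cylinder of mass m,
    radius r, and length h, with the z-axis along the cylinder axis."""
    ixx = iyy = m * (3 * r**2 + h**2) / 12.0
    izz = m * r**2 / 2.0
    return ixx, iyy, izz

# Chassis (0.5 x 0.5 x 0.25 m box) and wheel (r=0.2 m, h=0.1 m cylinder)
print(box_inertia(1.0, 0.5, 0.5, 0.25))
print(cylinder_inertia(1.0, 0.2, 0.1))
```

The off-diagonal terms (Ixy, Ixz, Iyz) are zero for these symmetric shapes, which is why simple geometry keeps the physics engine fast.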
Each component has been defined with its unique mass and moment of inertia values. Verify and launch–GUI! We should see the same display in RViz and the GUI tester (Figure 4-13).
Gazebo Introduction
The URDF file describes the static (color, size, etc.) and dynamic (inertial) properties of the components. We must convert the URDF file into a Simulation Description Format (SDF) file for Gazebo.
Background Information on Gazebo
Development of deep learning algorithms
Development of control algorithms
Simulation of sensor data for LiDAR systems, cameras, contact sensors, proximity sensors, etc.
Advanced physics engines, such as the Open Dynamics Engine (ODE)
We now review the process of loading the URDF description of the AI rover into Gazebo. We will first test the AI rover model by taking control of the wheels to move it, in a limited fashion, within a simulated world with obstacles. At first, this will be done without a two-wheeled differential-drive control system. We will develop that later, in the advanced sections of this chapter, by extending our AI rover model to independently control its own continuous wheel joints, graph sensor data, and verify and validate control and deep learning algorithms.
Starting Gazebo
If Gazebo is not installed, refer to Chapter 3.
Every time Gazebo runs, two different processes are created. The first is the Gazebo Server (gzserver), which is responsible for the overall simulation. The second is the Gazebo Client (gzclient), which starts the user GUI used to control the AI rover.
If you execute the $ gazebo Linux terminal command and get a series of errors or warning messages, you may have previous incarnations of ROS nodes running. Execute $ rosnode list to determine whether any nodes are still running. If there are, execute $ rosnode kill -a, which kills all running ROS nodes, and then run $ gazebo once again. Be certain to always check for any node warning messages.
There are two main areas: the simulation display window and the tabs panel. The simulation display window is where our generated world (and rover) will be displayed. The toolbar at the very top of the simulation display window contains symbols that control the simulated world. (Note the little red box; we will come back to it in a moment.) The tabs panel has three tabs: World, Insert, and Layers.
The World tab provides hierarchical access to sub-elements, such as GUI, Scene, Spherical Coordinates, Physics, Models, and Lights. While all of these categories are fascinating, at this time we are interested in the Models tab—where our AI rover model resides. We will introduce other categories as needed.
The Insert tab gives access to models developed by us (local) and others (cloud, located at http://gazebosim.org/models ). These models may be inserted into our active world.
The Layers tab allows toggling between different visual parts of our simulated world. We use this to “debug” our world view; for instance, determining if there are any unexpected collisions. The Layers tab initially contains no layers. As we develop our world further, we can add layers.
Gazebo Environment Toolbar
The toolbar is located at the very top of the Gazebo environment. Let’s review its symbols from left to right; their capabilities are described next and can also be seen in Figure 4-17.
Selection Mode: This mode selects the 3D AI rover or its components within the Gazebo environment. The properties of the AI rover or its components are listed within the World panel.
Translation Mode: This mode selects the AI rover or its components when a cursor is clicked around any part of the AI rover. There will be a 3D box wrapped around the selected component or even the AI rover itself. We can then move any part of the AI rover to any position required.
Rotation Mode: This mode is responsible for selecting the AI rover model when a cursor selects and draws a box around it. You can then rotate the AI rover model on either its roll, pitch, or yaw axis.
Scale Mode: This mode can select the AI rover sub-components, such as the box component. The scaling operation only works with very simple 3D shapes, such as a cube in the case of the chassis for the AI rover.
Undo Command: This undoes the last action committed by the developer. Repeating the undo operation undoes a series of actions in sequence.
Redo Command: This reverses the last undo, restoring the action it eliminated.
Box, Sphere, and Cylinder Modes: These three modes, identified by their shapes, create the corresponding shapes with varying dimensions within the Gazebo environment. The scale mode can then be used to modify their dimensions.
Lighting Mode: This allows one to change the angle and intensity of light within the Gazebo environment .
Copy Mode: Copies the selected items within the Gazebo environment.
Paste Mode: This mode pastes the copied item onto the Gazebo environment.
Selection and Alignment Mode: This mode will align two objects with each other in either the x, y, or z-axis.
Join Mode: This mode will allow one to select the location as to where two objects will be joined.
Alter View Angle Mode: This mode will allow one to change the angle of view for the user.
Screenshot Mode: This mode will take a screenshot of the simulation environment for documentation purposes. All files are saved within the ~/gazebo/pictures directory.
Log Mode: This takes all of the data and simulation values being generated and stores them in the ~/gazebo/log directory. These logs will be used to debug the deep learning routines for the AI rover.
The Invisible Joints Panel
Force: An effort applied to each continuous joint, measured in Newton-meters (N·m) for rotational joints.
Position: <x,y,z> 3D coordinates and <roll,pitch,yaw> rotation.
Velocity: Speed of the joint (meters per second for prismatic joints, radians per second for rotational joints). These can also be set via the PID values.
The Gazebo Main Control Toolbar
File has the sub-functions of Save World, Save World As, Save Configuration, Clone World, and Quit.
Edit has the sub-functions of Reset Model Poses, Reset World, Building Editor, and Gazebo Model Editor.
Camera has the sub-functions of Orthographic, Perspective, FPS View Control, Orbit View Control, and Reset View Angle.
View has the sub-functions of Grid, Origin, Transparent, Wireframe, Collisions, Joints, Center of Mass, Inertias, Contacts, and Link Frames.
Window has the sub-functions of Topic Visualization, Oculus Rift Virtual Reality Viewer, Show GUI Overlays, Show Toolbars, and Full Screen.
Help has the sub-functions of Hot Key Chart and Gazebo About.
Now that we have reviewed the controlling toolbar functions, we turn to how to run simulations and play back simulation runs. We must modify the AI rover’s URDF file into a form compatible with the Gazebo simulation environment by transforming it into an SDF (Simulation Description Format) file.
URDF Transformation to SDF Gazebo
Now we must transform the AI rover’s URDF file so that Gazebo can accept and process it; that is, we must convert the URDF to an SDF file. Note that SDF is an extension of URDF, using the same XML conventions. By making the appropriate modifications to the URDF file describing the AI rover, we allow Gazebo to convert it to the required SDF robot expression. We will now describe the required steps to transform URDF files into SDF files.
To complete this transformation, we must add the correct <gazebo> tags to the URDF file that describe the AI rover chassis, wheels, and caster within the Gazebo simulator. Note that the chassis includes not only the physical box of the AI rover but also the mass and moment of inertia of the embedded electronics, such as the Raspberry Pi. The <gazebo> tag lets us express elements found in SDF but not in URDF. If a <gazebo> tag is used without a reference="" property, it applies to the entire AI rover model; the reference parameter usually refers to a link or joint, such as the wheels defined in the AI rover URDF file. We can also define links and joints found within SDF that do not exist in the URDF file. With these SDF extensions, we can develop sophisticated simulations of a deep learning controller driving the AI rover within a Gazebo environment. In this and the next chapter, we will review some of the tutorials found at http://gazebosim.org/tutorials/?tut=ros_urdf for elements, such as links and joints, that can further enhance the simulations of the AI rover—for example, fixed sensors and dynamic actuators.
All <gazebo> tags must be defined before the closing </robot> tag of the model, so place them near the end of the file. However, there are caveats with the other elements in Gazebo.
If a link, such as the AI rover’s 3D chassis box or the caster, does not specify <visual> and <collision> elements, the Gazebo simulator will regard it as invisible to sensors such as lasers and will exclude it from simulated environment collision checking.
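As a sketch, a `<gazebo>` tag with a reference property applies only to the named link, while one without a reference applies model-wide (the material name is illustrative; these lines go just before the closing `</robot>` tag):

```xml
<!-- Applies only to the referenced link -->
<gazebo reference="base_link">
  <material>Gazebo/Blue</material>
</gazebo>

<!-- No reference property: applies to the entire model -->
<gazebo>
  <static>false</static>
</gazebo>
```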
Checking the URDF Transformation to SDF Gazebo
We will first test to determine if the Gazebo references work for the color schemes for the chassis and wheels. We will also use Gazebo references to develop the differential drive controller for the AI rover itself by this chapter’s end.
Once we have created this URDF file with the first Gazebo extensions, we must convert it to an SDF file to be certain that there are no issues with the transformation process. From the directory containing the file, we execute the following command: $ gz sdf -p ai_rover.urdf. If the transformation succeeds, the terminal prints the correct and equivalent SDF file with no errors. Having generated an SDF file, we can now develop the launch and simulation files needed to start our initial ROS simulation within Gazebo.
First Controlled AI Rover Simulation in Gazebo
As we develop our first controlled AI rover simulation within Gazebo, we must create two files that separate two steps in building this simulation environment. The first is the launch file that launches and views the AI rover, the environment, and any obstacles or mazes presented within the simulated environment. The second file describes what the Gazebo simulation world will contain, such as mazes, obstacles, and dangers. Note that the second file, the Gazebo world file, is itself launched by the first. The launch file should be located within the launch directory and the Gazebo obstacle simulation file within the worlds directory, both sub-directories of the ai_robotics directory.
This launch file will launch the empty world contained within the gazebo_ros package. We can also provide a world containing the Egyptian catacomb layout by replacing the ai_rover.world file. The AI rover model—its URDF extended with <gazebo> tags—will be spawned into the world by the spawn_model service from the gazebo_ros Noetic ROS node.
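A sketch of such a launch file, assuming the package name ai_rover and the file names used earlier (the world and spawn arguments may differ in your setup):

```xml
<launch>
  <!-- Start Gazebo with a world file (from the gazebo_ros package) -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="$(find ai_rover)/worlds/ai_rover.world"/>
  </include>

  <!-- Load the URDF (with its <gazebo> tags) onto the parameter server -->
  <param name="robot_description"
         textfile="$(find ai_rover)/urdf/ai_rover.urdf"/>

  <!-- Spawn the model into the running Gazebo world -->
  <node name="spawn_model" pkg="gazebo_ros" type="spawn_model"
        args="-urdf -param robot_description -model ai_rover"/>
</launch>
```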
First Deep Learning Possibility
Now that we have developed the first AI rover Gazebo simulation setup, we can experiment with methods to produce locomotion, then eventually intelligent navigation, obstacle avoidance, and ultimately sense-and-avoid cognitive capabilities. A first use of a deep learning controller might be to handle unexpected behavior of the AI rover within Gazebo, such as failing to travel in a straight line while navigating obstacles. This can happen because the URDF file with its <gazebo> extension tags might need further tuning to represent the physics within Gazebo. We might need to develop an intelligent, adaptive deep learning controller for the AI rover, or modify properties such as its mass distribution and moment-of-inertia values. If these values were constantly changing, we would need a controller that adapts accordingly.
Moving the AI Rover with the Joints Panel
Summary
We have achieved a lot within the pages of Chapter 4. We reviewed how to develop a model of the AI rover with URDF and showed how to extend a URDF file with <gazebo> tags to enable Gazebo simulations. We evaluated RViz, a 3D environment for designing models, and reviewed the process of developing models in RViz and deploying them to Gazebo. We also worked with multiple ROS commands to launch these simulations. In Chapter 5, we will see how the XML macro (Xacro) language enables even more sophisticated AI rover simulations by allowing the AI rover, sensors, actuators, and simulated environments to be developed more efficiently. We will also use more examples of UML modeling for these same Xacro files.
Exercise 4.1: What additional changes would you make to the ai_rover.world file to include obstacles other than the construction cones?
Exercise 4.2: What additional changes would you make to spawn an additional number of construction cones within the ai_rover.world? How can you place them differently or symmetrically, etc.?
Exercise 4.3: How does the use of the Joints panel highlight the need for a controller and driver for the differential-wheeled system? Why can we not develop a differential driver within RViz?
Exercise 4.4: Why do we need to verify and validate both the URDF and SDF files being developed with tools such as check_urdf?