© David Allen Blubaugh, Steven D. Harbour, Benjamin Sears, Michael J. Findler 2022
D. A. Blubaugh et al., Intelligent Autonomous Drones with Cognitive Deep Learning, https://doi.org/10.1007/978-1-4842-6803-2_4

4. Building a Simple Virtual Rover

David Allen Blubaugh1, Steven D. Harbour2, Benjamin Sears3, and Michael J. Findler4
(1)
Springboro, OH, USA
(2)
Beavercreek Township, OH, USA
(3)
Xenia, OH, USA
(4)
Mesa, AZ, USA
 

After installing the development operating system (Ubuntu), the target operating system (ROS), and their associated tools in the last chapter, we are going to “play” with the tools to build a very simple rover with RViz and drive it in the Gazebo simulator. We will also build, test, and run the chassis of the rover one part at a time.

Objectives

The following are the objectives required for successful completion of this chapter:
  • Understand the relationship between ROS, RViz, and Gazebo

  • Expand your understanding of ROS commands

  • Explore RViz to create a simple rover

  • Use Gazebo to move the rover in a simple virtual environment

ROS, RViz, and Gazebo

As a reminder, ROS stands for Robot Operating System, and our rover will use ROS as its operating system. An operating system is software that connects the different components of a system. The components can be hardware (motors, sensors, etc.), software (neural network libraries, etc.), or “squishy-ware” (humans). [Although the last term is added for humor, the operating system is key to interacting with the user.] RViz and Gazebo are software tools used in the development of a ROS robot (Figure 4-1). RViz is used to build and visualize the models of our virtual rover, while Gazebo simulates those models in a physics-based virtual world. Another way to think about the two programs is that RViz explores individual objects in a controlled space (a lab), while Gazebo puts the objects in a chaotic “real-world” environment with little control over their interactions.

An illustration of the relationships between the components. On the left, the Ubuntu laptop has gedit, Python, libraries, and the ROS virtual rover dev/test environment with RViz and Gazebo. On the right, the physical rover has sensor libraries, ROS, and ops. The Ubuntu laptop points to the physical rover via transferred executables and a double-headed feedback loop.

Figure 4-1

Development system relationships

Figure 4-1 graphically describes our project’s major components and their interrelationships. The blue boxes are our physical computing systems (laptop and rover), which run the Ubuntu operating system. The orange boxes represent software components and libraries installed on each system. The yellow boxes are internal ROS tools for developing and testing ROS models. Once a virtual ROS model has been thoroughly vetted, the executable script is transferred to the physical rover (green arrow). Assuming everything is working, our rover will be able to move about in the real world and transmit data back to the laptop (gray arrow). The gray arrow also carries the “human-in-the-loop” decisions that might be used to control the rover, such as “Start,” “Come Home,” or “Pause.” Gazebo will allow us to view the effects of physics on the chassis, simulate the power applied to each motor, and simulate the algorithms.

Essential ROS Commands

Table 4-1 lists the ROS commands we will use frequently. These ROS commands allow us to control, analyze, and debug nodes contained in a package. A node is a self-contained model of a sub-part of the system (package). A package contains the different models we are using to describe our rover. For instance, our rover will have a model composed of a chassis, wheels, sensors, etc., which are the sub-models. Each of these sub-models may have physics models applied, such as speed and acceleration. Furthermore, our rover will interact with walls, holes, and obstacle models. All of these models and sub-models make up our rover package. The nodes mentioned in Table 4-1 usually correspond to a “physical” sub-model, such as the wheels.
Table 4-1

Essential ROS Commands

Command      Format                                         Action
roscore      $ roscore                                      Starts the master node
rosrun       $ rosrun [package] [executable]                Runs an executable from a package and creates its node(s)
rosnode      $ rosnode info [node_name]                     Shows information about active nodes
rostopic     $ rostopic <subcommand> <topic_name>           Shows information about ROS topics (subcommands: list, info, type)
rosmsg       $ rosmsg <subcommand> [package]/[message]      Shows information about message types (subcommands: list, info, type)
rosservice   $ rosservice <subcommand> [service]            Shows runtime information about services (subcommands: args, call, find, info, list, type)
rosparam     $ rosparam <subcommand> [parameter]            Gets and sets data used by nodes

Rather than go into the details of each command, we will explore them more deeply when we use them in the text.
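To make Table 4-1 concrete, here is a rough example of how these commands fit together in a typical session; the exact node and topic names you see depend on what is running on your machine:
$ roscore                        # terminal 1: start the master node
$ rosrun rviz rviz               # terminal 2: run the rviz executable from the rviz package
$ rosnode list                   # terminal 3: list the nodes registered with the master
$ rosnode info /rosout           # details (topics, connections) for one node
$ rostopic list                  # list the active topics (pipelines)
$ rosmsg info rosgraph_msgs/Log  # fields of the message type carried by /rosout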

Robot Visualization (RViz)

The final model of our “simplified virtual rover” is composed of four sub-components (Figure 4-2): a chassis, a caster, and two wheels. The different components are modeled as a box, a sphere, and disks. I think one of the key takeaways here is that the model does not have to look like the physical rover. To quote British statistician George Box, “All models are wrong, but some are useful.” We have a very useful rover that tests only the essentials.

An illustration of a simplified virtual rover in the virtual world with a grid. Its components are labeled as follows. Wheels on the left and right; chassis in the center; castor at the bottom.

Figure 4-2

Simplified virtual rover we are going to build

We will build the simplified virtual rover shown in Figure 4-2 using RViz, a 3D modeling tool for ROS . RViz designs and simulates 3D components, such as wheels and sensors. Besides defining the dimensions of the components (HxWxD), we can model characteristics (color, material, etc.) and behavior (speed, intelligence, etc.). RViz can display 2D and 3D data from optical cameras, IR sensors, stereo cameras, lasers, radar, and LiDAR. RViz lets us build and test individual components and systems. It also offers limited testing of component interactions in the environment. Finally, RViz tests both the virtual and physical rover. Thus, we can catch design and logic errors in the simulator before and after building the hardware. We can debug the AI rover’s sub-system nodes and routines inexpensively using RViz.

To start ROS communicating with RViz, we will open three terminal windows (Figure 4-3) using Terminator. In terminal 1, we start the master node with the roscore command (orange). In terminal 2, we start the RViz program with rosrun rviz rviz (red). The rosrun command takes two arguments: the ROS package the script is located in (rviz) and the script to run (rviz). The RViz program will “pop up” on your screen. Minimize it to run the final command.

A screenshot of ros@ros-VirtualBox with 3 terminal windows. The top left window is labeled 1 and $ roscore is highlighted in a box. The bottom window is labeled 2 and $ rosrun rviz rviz is highlighted in a box. The top right window is labeled 3 and $ rostopic list is highlighted in a box.

Figure 4-3

Terminator displaying three terminals

Finally, in terminal 3, we verify that roscore is communicating with rviz by running the rostopic list (yellow). The output shown lists the active pipelines between the nodes running in ROS—those in the yellow boxes belong to rviz and roscore. A pipeline is a computer science term that describes the dedicated pathway for passing messages between components. We will be using these pipelines later on, along with rostopic, to look at the messages being passed.
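With only roscore and RViz running, the list is short; on our setup it looks approximately like the following (your list may differ slightly):
$ rostopic list
/clicked_point
/initialpose
/move_base_simple/goal
/rosout
/rosout_agg
/tf
/tf_static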

Clicking the RViz quick-launch icon restores the RViz program started by the rosrun rviz rviz command, as in Figure 4-4.

Note

If the rosrun rviz rviz command generates an error message, verify that the line source ~/catkin_ws/devel/setup.bash is in the .bashrc file in your home directory.

If the rosrun rviz rviz command still does not work, then reinstall the entire ros-noetic-desktop-full installation package. Examine the printout and determine if there are any installation errors following the ros-noetic-desktop-full reinstall.
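A minimal way to add the missing line and reload your shell, assuming the default ~/catkin_ws location, is:
$ echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
$ source ~/.bashrc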

A screenshot of R Viz window. Below the menu bar, it has options for interact, move camera, and so on. On the left, it has global options, global status, grid, and add button. In the center, it has an empty virtual world with grid. On the right, it has the current view. At the bottom it has ROS and wall time, ROS and wall elapsed.

Figure 4-4

RViz user interface

There are four default panels in Figure 4-4: Tools (orange); Views (blue); Displays (yellow); and Time (green). We will ignore the Time panel since we will not use it. The center window is not a panel; it is the visualization of our virtual world. It is currently empty, with a grid placeholder. Assuming you understand the File and Help menu items, the only interesting menu item is Panels. The Panels menu item opens and closes different “panels.” Panels are different ways of interacting with the current model. We will explain the different panels in more depth as needed. The Tools panel lets us work/experiment with objects in the Views panel:
  • Interact: Reveal interactive markers.

  • Move Camera: Move the camera around in the Views panel with the mouse or keyboard.

  • Select: Point-and-drag a wireframe box around the 3D objects.

  • Focus Camera: Focus on a point or an object.

  • Measure: Measure distances between objects.

  • 2D Pose Estimate: Set an estimate of the rover’s position and orientation in the virtual world.

  • Publish Point (not shown): Publishes the coordinates of a point clicked in the virtual world.

Note

Rviz tutorials can be found at the following locations:

http://wiki.ros.org/RViz/Tutorials and http://wiki.ros.org/RViz/UserGuide

The Displays panel interactively adds, removes, and renames the display elements for the objects modeled in the virtual world (Figure 4-4). In other words, when you create a rover chassis it is modeled as a RobotModel. The Displays panel allows you to display the chassis axis, speed vector, etc. for a given object. The Add button presents appropriate graphical elements for your modeled object (in this case, the default grid), such as Camera, PointCloud, RobotModel, Axes, and Map. Selecting an element provided in the Displays panel will show its description box (Figure 4-5). If you select OK, a 3D axis will be displayed at the default grid to show its orientation in the virtual world.

A screenshot of the R Viz window with display options and time. It has create visualization dialog box with by display type and by topic tabs. By display type has various options in which axes is selected. Below them, description is displays an axis at the target frame's origin and display name is axes. At the bottom, it has cancel and O K buttons.

Figure 4-5

Create visualization options by display types and by topic

The right side of the RViz graphical user interface (GUI) layout is concerned with the Views panel (Figure 4-6). It controls which camera we are using to view the virtual world. The default is Orbit, which simulates a camera in “orbit” around our world. The other two cameras we might use are the FPS (first-person shooter) and ThirdPersonFollower. These are “gaming” terms. To understand these terms, imagine a scene of a murder. The scene has a perpetrator (first-person shooter), a victim (second person), and a witness (third person). So the FPS camera is from the object’s eyes, while the ThirdPersonFollower is from the perspective of someone “witnessing” the actions (a third person).

A screenshot of the R Viz window with the view options. It has a menu in which orbit, r viz is selected. The text under current view and orbit, r viz includes distance, 10; focal shape size, 0.05; focal shape fixed size; yaw, pitch, and field of view, 0.785398; focal point, 0. At the bottom, it has save, remove, and rename buttons.

Figure 4-6

View options and time displays

Catkin Workspace Revisited

Recall creating a Catkin workspace in Chapter 3 for our quick testing of RViz and Gazebo. That took six steps; we will now run six more commands to organize our project, as follows:
  1. cd ~/catkin_ws/src

  2. catkin_create_pkg ai_rover

  3. cd ~/catkin_ws

  4. mkdir src/ai_rover/urdf

  5. mkdir src/ai_rover/launch

  6. catkin_make
After executing these six commands, you will have the following directory structure (Figure 4-7). The important folder names are in bold, and a description of each folder is at the bottom of the box. (Folders we will not be explicitly using are left off Figure 4-7 for simplicity.) The second step, catkin_create_pkg, generates the package’s CMakeLists.txt and package.xml files, and the final step, catkin_make, builds the workspace and generates the remaining folders and related files.

An organizational chart of the A I rover package. On the top, it has catkin w s, root directory of our project that divides into build, used by ROS to store package executables; devel, used by ROS to store intermediate files; s r c, location of ROS package source code. S r c includes a i rover that divides into launch and u r d f.

Figure 4-7

Simplified folder organization for AI rover package

This is the “required” folder structure for ROS projects. The root directory is catkin_ws, and the commands in this book assume that location. The build and devel directories contain libraries and scripts needed to compile and execute projects. When developing ROS scripts applicable to all packages, we store the files in the src directory. The ai_rover (sub-)directory contains scripts specific to the AI rover project. The urdf directory contains the description of the rover components. There are two other files of interest: CMakeLists.txt and package.xml. DO NOT EDIT! CMakeLists.txt is created in two folders, src and ai_rover, and is used to compile scripts in their respective folders. The other file, package.xml, holds the package’s metadata and dependencies.
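If you want to confirm the layout from the terminal, a listing similar to the following should appear (folders we are not using are omitted, and catkin_create_pkg may also have generated include and src folders inside ai_rover):
$ ls ~/catkin_ws
build  devel  src
$ ls ~/catkin_ws/src/ai_rover
CMakeLists.txt  launch  package.xml  urdf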

The Relationship Between URDF and SDF

The Universal Robot Description Format (URDF) file describes the logical structure of the AI rover. The RViz-readable URDF file is formatted in Extensible Markup Language (XML), which is a set of rules for encoding objects in a human-readable format. The URDF file contains the static dimensions of all the environment objects, such as walls, obstructions, and the AI rover and its components, along with any dynamic parameters used by those objects. The URDF is the description (model) of the initial state of the AI rover and the environment. However, to dynamically run our AI rover in Gazebo, we need to convert the URDF file to a Simulation Description Format (SDF) file using the gz sdf tool (GZSDF in Figure 4-8).

An illustration of the relationship between U R D F and S D F. From the left, a rectangle labeled R Viz build model, asterisk dot U R D F points G Z S D F to a rectangle labeled gazebo run model, asterisk dot S D F.

Figure 4-8

Relationship between RViz and Gazebo in ROS development

The SDF file uses the initial static, dynamic, and kinematic characteristics of the AI rover described in the URDF to initialize the animated AI rover in Gazebo . For example, sensor-, surface-, texture-, and joint-friction properties all can be defined within the URDF file and converted into an SDF file. We can define dynamic effects in the URDF file that might be found within the environment, such as cave-ins, collapsing floors, and explosions caused by methane build-ups. Whenever you want to add a component to the AI rover, you put it into the URDF and then convert it to the SDF .
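As a preview of that conversion (we will use it in earnest later in this chapter), the gz sdf tool that ships with Gazebo can print the SDF equivalent of a URDF file to the terminal, or save it with a redirect; the file name here is simply the one we are about to create:
$ gz sdf -p ai_rover.urdf
$ gz sdf -p ai_rover.urdf > ai_rover.sdf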

Building the Chassis

Two required components need to be modeled in each URDF file. The link component is responsible for describing the static physical dimensions, orientation, and material of each object. The joint component describes dynamic physics, such as the amount of friction and rotational characteristics between objects.

The chassis of our AI rover is a very simple 3D box. (Download the source code at Apress: https://github.com/Apress/Intelligent-Autonomous-Drones-with-Cognitive-Deep-Learning ) Change to the urdf directory by entering cd ~/catkin_ws/src/ai_rover/urdf at the terminal. Then create the ai_rover.urdf file using Gedit (gedit ai_rover.urdf) and enter the following code:
<?xml version='1.0'?>
<robot name="ai_rover">
    <!-- Base Link -->
    <link name="base_link">
        <visual>
           <origin xyz ="0 0 0" rpy="0 0 0" />
               <geometry>
                    <box size="0.5 0.5 0.25"/>
               </geometry>
           </visual>
    </link>
</robot>

This describes our chassis as a 3D box 0.5 m long, 0.5 m wide, and 0.25 m tall located at the origin (0,0,0) with no rotation (no roll, no pitch, no yaw). (Most simulators use the metric system.) The chassis’ base_link is the link component. All other link components will be defined relative to this base_link. Constructing the rover is similar to building a robot in real life; we will add pieces to the chassis to customize our rover. We use this initial base_link of the chassis to define the AI rover’s initial position.

Using the ROSLAUNCH Command

The roslaunch command is used to launch external programs within the ROS environment, such as RViz and Gazebo. We use the roslaunch command to display the URDF files located in the URDF directory in RViz. The roslaunch command automatically starts the roscore master node for every ROS session. The roslaunch configuration file has the .launch file extension and must be located in the launch directory. In the launch directory, create the configuration file using gedit RViz.launch at the command prompt. Add the following lines:
<launch>
   <!-- values passed by command line input -->
   <arg name="model" />
   <arg name="gui" default="False" />
   <!-- set these parameters on Parameter Server -->
   <param name="robot_description"
   textfile="$(find ai_rover)/urdf/ai_rover.urdf" />
   <param name="use_gui" value="$(arg gui)" />
   <!-- Start 3 nodes: joint_state_publisher,
          robot_state_publisher and rviz -->
   <node name="joint_state_publisher"
      pkg="joint_state_publisher"
      type="joint_state_publisher" />
   <node name="robot_state_publisher"
      pkg="robot_state_publisher"
      type="robot_state_publisher" />
   <node name="rviz" pkg="rviz" type="rviz"
      args="-d $(find ai_rover)/urdf.rviz"
      required="true" />
</launch>
The roslaunch file has the following sections:
  • Import the ai_rover.urdf model.

  • Start the joint_state_publisher, robot_state_publisher, and the RViz 3D CAD environment.

Note

All launch and URDF/SDF files must be executable:

$ sudo chmod +rwx RViz.launch

The general format of the roslaunch command is: roslaunch <package_name> <file.launch> <opt_args>, where package_name is the package, file.launch is the configuration file, and opt_args are optional arguments needed by the configuration file. To launch our simple chassis, the command is as follows:
$ roslaunch ai_rover RViz.launch model:=ai_rover.urdf
Interpreting this command: “Launch RViz with the ai_rover package using the RViz.launch configuration file, which in turn will use the ai_rover.urdf as the model to run.” The RViz screen should look like Figure 4-9. The small red box in the middle is our “chassis.”

A screenshot of the R Viz window. Below the menu bar, it has options for interact, move camera, and so on. On the left, under displays, it has global options. In the center, it has a chassis on the grid. On the right, it has current view with orbit, r viz. At the bottom it has ROS time, ROS elapsed, wall time, and wall elapsed with a reset button.

Figure 4-9

Simple Rover chassis

If there is no 3D box, examine the Displays panel to determine if RobotModel and TF (model transform) are defined. If not, do the following:
  • Select the Add button and add RobotModel.

  • Select the Add button and add TF.

  • Finally, go to the Global Options ➤ Fixed Frame option and change the value to base_link.

There should now be a box on the main screen. Save your work!

Creating Wheels and Drives

Next, add the 3D links for the wheels and drives to our model. Remember, in ROS, a link is the “physical” structure between “joints.” Joints are where the movement happens. Think of a human skeleton: the shoulder and elbow joints are linked by the humerus bone. There are six joint types, which are defined by their degrees of freedom (DoF) about the XYZ axes:
  • Planar Joint: This joint allows movement in a plane perpendicular to an axis, such as a puck sliding and spinning on a table. (three DoF: two translations and one rotation)

  • Floating Joint : This type of joint allows motion in all six DoF (translate, rotate for each axis). An example of a joint such as this would be a wrist.

  • Prismatic Joint: This joint slides along an axis and has a limited upper and lower range of distance to travel. An example of this would be a spyglass telescope. Think pirate telescope. (one DoF: translate)

  • Continuous Joint : This joint rotates around the axis like the wheels of a car and has no upper or lower limits. (one DoF: rotate)

  • Revolute Joint : This joint rotates around an axis, similar to continuous, but has upper and lower bounds of angles of rotation. For example, a volume knob. (one DoF: rotate)

  • Fixed Joint : This joint cannot move at all. All degrees of freedom are locked. An example would be the static location of a mirror on a car door. (zero DoF)

We need to attach wheels to our chassis , and the correct joint to use is the continuous joint, because wheels rotate 360° continuously. Each wheel can rotate in both forward and reverse directions. To add the wheels to our model, modify the ai_rover.urdf file by adding the bold lines. Save the file after making the edits.
<?xml version='1.0'?>
<robot name="ai_rover">
     <!-- Base Link -->
     <link name="base_link">
          <visual>
               <origin xyz="0 0 0" rpy="0 0 0" />
               <geometry>
                    <box size="0.5 0.5 0.25"/>
               </geometry>
          </visual>
     </link>
     <!-- Right Wheel -->
     <link name="right_wheel">
          <visual>
               <origin xyz="0 0 0" rpy="1.570795 0 0" />
               <geometry>
                    <cylinder length="0.1" radius="0.2" />
               </geometry>
          </visual>
     </link>
     <joint name="joint_right_wheel" type="continuous">
          <parent link="base_link"/>
          <child link="right_wheel"/>
          <origin xyz="0 -0.30 0" rpy="0 0 0" />
          <axis xyz="0 1 0"/>
     </joint>
     <!-- Left Wheel -->
     <link name="left_wheel">
          <visual>
               <origin xyz="0 0 0" rpy="1.570795 0 0" />
               <geometry>
                    <cylinder length="0.1" radius="0.2" />
               </geometry>
          </visual>
     </link>
     <joint name="joint_left_wheel" type="continuous">
          <parent link="base_link"/>
          <child link="left_wheel"/>
          <origin xyz="0 0.30 0" rpy="0 0 0" />
          <axis xyz="0 1 0" />
     </joint>
</robot>
The following are the modifications made to the ai_rover.urdf model :
  • Each wheel has two parts, the link and the joint.

  • The <link> of each wheel is defined as a cylinder with a radius of 0.2 m and a length of 0.1 m. Each wheel is located at (0, ±0.3, 0) and is rotated by π/2 (1.57...) radians or 90 degrees about the x-axis.

  • The <joint> of each wheel defines the axis of rotation as the y-axis and is defined by the XYZ triplet “0, 1, 0”. The <joint> elements define the kinematic (moving) parts of our model, with the wheels rotating around the y-axis.

  • The URDF file is a tree structure with the AI rover’s chassis as the root (base_link), and each wheel’s position is relative to the base link.

Note

Our simplified virtual model’s dimensions are not the same as the physical dimensions of the physical rover. This might cause some issues with training deep learning and cognitive networks. We will discuss these issues in Chapter 12 and beyond.

Verify and launch the modified code. Your RViz display should be similar to Figure 4-10. If you do not receive a “Successfully Parsed” XML message, review your file for errors, such as spelling and syntax; e.g., forgetting a “>” or using “\” instead of “/”.

Note

Always test file correctness after every new component added. For example, if you add the left wheel immediately check the correctness of the XML source code within the URDF file by executing the following:

$ check_urdf ai_rover.urdf

$ roslaunch ai_rover RViz.launch model:=ai_rover.urdf

These two commands (check_urdf and roslaunch ai_rover) should be executed each time the file is modified. We will use “verify and launch” as shorthand for these two commands.
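For the two-wheel model, the first command should print something close to the following (the exact formatting and ordering vary by release); the key items are the “Successfully Parsed” message and the two children hanging off base_link:
$ check_urdf ai_rover.urdf
robot name is: ai_rover
---------- Successfully Parsed XML ---------------
root Link: base_link has 2 child(ren)
    child(1):  left_wheel
    child(2):  right_wheel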

A screenshot of the R Viz window. Below the menu bar, it has options for interact, move camera, and so on. On the left, under displays, it has global options. On the right, it has a rover chassis with the highlighted left wheel on the grid. At the bottom it has ROS time, ROS elapsed, wall time, and wall elapsed with a reset button.

Figure 4-10

Attaching left and right wheels to rover chassis

Creating AI Rover’s Caster

We now have the two wheels successfully attached to the AI rover’s chassis . To mimic the physical GoPiGo rover, we will add a caster on the lower-back bottom of the AI rover’s chassis for “balance.” We could add a powered caster as a joint to add actuated turning, but this is still too complex. Instead, we will add the caster as a visual element and not as a joint. The caster slides along the ground plane as the wheels control the direction.

The highlighted changes in bold made to the ai_rover.urdf file add the caster as a visual element to the AI rover (base_link) chassis. Please note the code for the left and right wheels is collapsed, indicated by “...”, and does not change.
<?xml version='1.0'?>
<robot name="ai_rover">
     <!-- Base Link -->
     <link name="base_link">
          <visual>
               <origin xyz="0 0 0" rpy="0 0 0" />
               <geometry>
                    <box size="0.5 0.5 0.25"/>
               </geometry>
          </visual>
          <!-- Caster -->
          <visual name="caster">
               <origin xyz="0.2 0 -0.125" rpy="0 0 0" />
               <geometry>
                    <sphere radius="0.05" />
               </geometry>
          </visual>
     </link>
     <!-- Right Wheel --> ...
     <!-- Left Wheel --> ...
</robot>
We have modeled the caster as a sphere with a radius of 0.05 m (5 cm or ~2 in). After making these changes to ai_rover.urdf, verify and launch. Your display should look like Figure 4-11. You can see the caster sphere offset at location “0.2, 0, -0.125.”

A screenshot of the R Viz window. Below the menu bar, it has options for interact, move camera, and so on. On the left, under displays, it has global options and add button. On the right, it has an enlarged rover chassis with the highlighted left wheel.

Figure 4-11

The rover chassis with the attached caster

Adding Color to the AI Rover (Optional)

The simple chassis modeled in ai_rover.urdf is constantly modified to reflect new design requirements. For instance, to modify the color of the chassis and wheels, we set the material color. The code in bold has a few interesting points: 1) if you define a color (blue) in the parent link, it affects the sub-links (base_link/castor); 2) if you define a color (black), it can be reused (left/right wheel); 3) each component’s <material> color is located in the <visual> block, which must be inside a <link> block; and 4) the color of the link is a “visual” component. The last point means that the visual component will not affect any dynamic attributes; it is decorative only.
<?xml version='1.0'?>
<robot name="ai_rover">
     <!-- Base Link -->
     <link name="base_link">
          <visual>
               <material name="blue">
                    <color rgba="0 0.5 1 1"/>
               </material>
          </visual>
          <!-- Caster -->
          <visual name="caster">
               <origin xyz="0.2 0 -0.125" rpy="0 0 0" />
               <geometry>
                    <sphere radius="0.05" />
               </geometry>
          </visual>
     </link>
     <!-- Right Wheel -->
     <link name="right_wheel">
          <visual>
               <material name="black">
                    <color rgba="0.05 0.05 0.05 1"/>
               </material>
          </visual>
     </link>
     <!-- Left Wheel -->
     <link name="left_wheel">
          <visual>
               <material name="black"/>
          </visual>
     </link>
</robot>
At the command window, verify and launch. Your RViz display should look something like Figure 4-12.

A screenshot of the R Viz window. Below the menu bar, it has options for interact, move camera, select, and so on. On the left, under displays, it has global options, status, and so on. On the right, it has a rover chassis in a different color with the highlighted left wheel. At the bottom, it has ROS time, ROS elapsed, wall time, and wall elapsed.

Figure 4-12

AI rover chassis color change

Collision Properties

Our simple model is finished enough to define the collision properties for the model—think of a collision property as a “bounding box.” The bounding box is the smallest box/sphere/cylinder that surrounds our model’s components, and the sum of the bounding boxes for the components is the bounding box for the rover. To do this, we add <collision> properties to each component. The collision properties are defined for Gazebo’s collision-detection engine. For each simulation time frame, the components are checked for a collision. Modeling our AI rover as many simple components optimizes collision detection.

The <collision> code properties are identical to the <origin> and <geometry> properties of each component—just copy and paste between <collision>...</collision> tags. The XML source code between the <visual>...</visual> blocks was collapsed in order to save space and highlight the new <collision> blocks:
<?xml version='1.0'?>
<robot name="ai_rover">
     <!-- Base Link -->
     <link name="base_link">
          <visual>...</visual>
          <!-- Box collision -->
          <collision>
               <origin xyz="0 0 0" rpy="0 0 0" />
               <geometry>
                    <box size="0.5 0.5 0.25"/>
               </geometry>
          </collision>
          <!-- Caster -->
          <visual name="caster">...</visual>
          <!-- Caster Collision -->
          <collision>
               <origin xyz="0.2 0 -0.125" rpy="0 0 0" />
               <geometry>
                    <sphere radius="0.05" />
               </geometry>
          </collision>
     </link>
     <!-- Right Wheel -->
     <link name="right_wheel">
          <visual>...</visual>
          <!-- Right Wheel Collision -->
          <collision>
               <origin xyz="0 0 0" rpy="1.570795 0 0" />
               <geometry>
                    <cylinder length="0.1" radius="0.2" />
               </geometry>
          </collision>
     </link>
     <joint name="joint_right_wheel" type="continuous">...</joint>
     <!-- Left Wheel -->
     <link name="left_wheel">
          <visual>...</visual>
          <!-- Left Wheel Collision -->
          <collision>
               <origin xyz="0 0 0" rpy="1.570795 0 0" />
               <geometry>
                    <cylinder length="0.1" radius="0.2" />
               </geometry>
          </collision>
     </link>
     <joint name="joint_left_wheel" type="continuous">...</joint>
</robot>

Verify and launch. Since the collision properties affect the dynamic physics, not the looks, you will not see any visual differences! The collision properties are what allow the components to “bump” into other objects.

Testing the AI Rover’s Wheels

Now we will test the wheels to see if they can rotate correctly. To perform these tests, we launch a GUI pop-up screen to test the wheel joints. Verify and launch with a small change:
$ check_urdf ai_rover.urdf
$ roslaunch ai_rover RViz.launch model:=ai_rover.urdf gui:=true

We will call this verify and launch–GUI. We can visualize movement!

Note

If you get a “GUI has not been installed or available” error message, run the following:

$ sudo apt-get install ros-noetic-joint-state-publisher-gui

This forces the GUI to install.

Recall that every time we execute the RViz.launch file, three specific ROS nodes are launched: joint_state_publisher, robot_state_publisher, and RViz. The joint_state_publisher node maintains a list of the non-fixed joints, such as the left and right wheels. Every time the left (right) wheel rotates, the joint_state_publisher sends a JointState message for the left (right) wheel to RViz to update the drawing of that wheel. Since each wheel generates its own messages, the wheels rotate independently. After verify and launch–GUI, your display should look like Figure 4-13. Since the wheels are solid black, you cannot see the rotation, so you will need to use the joint_state_publisher window. The window displays changes to the different wheels as they occur during simulation; you can set the initial values before simulation and modify values during simulation. These are very powerful debugging tools that you might use frequently.

A screenshot of the R Viz window. Below the menu bar, it has options for interact, move camera, select, focus camera, measure, and 2 D pose estimate. Below them, on the left, it has a joint state publisher window with a joint right wheel, joint left wheel, randomize button, and center button. On the right, it has a rover chassis in the grid.

Figure 4-13

AI rover wheel joint test GUI (joint_state_publisher)

Examining the joint_state_publisher GUI, you should see four items of interest:
  • joint_right_wheel: Set the angle of the wheel between ±π.

  • joint_left_wheel: Set the angle of the wheel between ±π.

  • Randomize: Randomly assign a value between ±π for each independent wheel.

  • Center: Set both wheels to zero radians.
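While you drag the sliders, you can also watch the underlying messages from a spare terminal. The joint_state_publisher publishes them on the /joint_states topic; a representative excerpt of the output is shown below (your values will differ), where the position entries track the slider angles in radians. Press Ctrl+C to stop:
$ rostopic echo /joint_states
...
name: [joint_right_wheel, joint_left_wheel]
position: [0.52, -1.04]
...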

Physical Properties

Notice that our wheels are spinning, but the AI rover chassis is not moving. To see the movement, we need to do two things: add physics properties and run the AI rover in Gazebo. RViz visualizes the components but does not show the physics (movement); we need to add inertial properties (mass and inertia) for each component.

An object’s inertial properties are its mass and its moment of inertia, which measures how strongly the object resists being rotationally accelerated or decelerated. For simple objects with geometric symmetry, such as a cube, cylinder, or sphere, the moment of inertia is easy to calculate. Because we modeled the AI rover with simple components, Gazebo’s optimized physics engine quickly calculates the moments of inertia.

This means the chassis, wheels, and caster will all have a unique mass and inertia. Every <Link> element being simulated will also need an <inertial> tag. The two sub-elements of the inertial element are defined as follows:
  • <inertial>
    • <mass>: Mass of the object, measured in kilograms.

    • <inertia>: The 3×3 rotational inertia matrix, expressed in the link’s frame, which defines the moment of inertia in 3D space.

  • </inertial>

Since the inertia matrix is symmetric (the x ➔ z element is the same as the z ➔ x element), we only need six elements of the matrix to fully define the moment of inertia. Each component (chassis, wheels, and caster) must have the six <inertia> values defined, as highlighted in bold.

IXX   Ixy   Ixz
Ixy   IYY   Iyz
Ixz   Iyz   IZZ

Updating the ai_rover.urdf file with the <inertial> properties for each component gives Gazebo enough information to simulate the mass and inertia of the entire rover. The modifications to the source code are highlighted in bold:
<?xml version='1.0'?>
<robot name="ai_rover">
   <!-- Base Link -->
   <link name="base_link">
      <visual>...</visual>
      <!-- Box collision -->
      <collision>...</collision>
      <inertial>
         <mass value="5"/>
         <inertia ixx="0.13" ixy="0.0" ixz="0.0"
             iyy="0.21" iyz="0.0" izz="0.13"/>
      </inertial>
      <!-- Caster -->
      <visual name="caster">...</visual>
      <!-- Caster Collision -->
            <collision>...</collision>
      <inertial>
         <mass value="0.5"/>
         <inertia ixx="0.0001" ixy="0.0" ixz="0.0"
             iyy="0.0001" iyz="0.0" izz="0.0001"/>
      </inertial>
   </link>
   <!-- Right Wheel -->
   <link name="right_wheel">
      <inertial>
         <mass value="0.5"/>
         <inertia ixx="0.01" ixy="0.0" ixz="0.0"
             iyy="0.005" iyz="0.0" izz="0.005"/>
      </inertial>
   </link>
   <joint name="joint_right_wheel" type="continuous">...</joint>
     <!-- Left Wheel -->
     <link name="left_wheel">
          <inertial>
               <mass value="0.5"/>
               <inertia ixx="0.01" ixy="0.0" ixz="0.0"
                     iyy="0.005" iyz="0.0" izz="0.005"/>
          </inertial>
     </link>
     <joint name="joint_left_wheel" type="continuous"> ...</joint>
</robot>

Each component has been defined with its unique mass and moment of inertia values. Verify and launch–GUI! We should see the same display in RViz and the GUI tester (Figure 4-13).
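Where do values such as 0.13, 0.21, 0.01, and 0.005 come from? They appear to be the standard textbook moments of inertia for simple solids, rounded; the chapter does not show the arithmetic, so treat the following as our own check. Which axis receives which value depends on how the component is oriented:
Box with sides x, y, z and mass m:  Ixx = m(y^2 + z^2)/12,  Iyy = m(x^2 + z^2)/12,  Izz = m(x^2 + y^2)/12
Solid cylinder of radius r, length h:  I(spin axis) = m r^2 / 2,  I(perpendicular) = m(3r^2 + h^2)/12
Chassis (m = 5 kg, 0.5 × 0.5 × 0.25 m):  5(0.5^2 + 0.25^2)/12 ≈ 0.13  and  5(0.5^2 + 0.5^2)/12 ≈ 0.21
Wheel (m = 0.5 kg, r = 0.2 m, h = 0.1 m):  0.5(0.2^2)/2 = 0.01  and  0.5(3·0.2^2 + 0.1^2)/12 ≈ 0.005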

Gazebo Introduction

The UML component diagram in Figure 4-14 describes the structure of the static, dynamic, and environmental libraries of the RViz and Gazebo programs. This is why we break up our very complex problem the way we do. These are not “classes,” but rather a higher abstraction that helps us organize the “libraries of components” we will need to solve our problem.

An illustration of the libraries of R Viz and Gazebo programs. From the bottom, 3 rectangles labeled static, dynamic, and environmental point to a rectangle labeled Gazebo A I rover stimulation. Static has 3 layers; dynamic, environmental, and Gazebo A I rover stimulation have 2 layers.

Figure 4-14

Object generalization in Gazebo simulator

The URDF file describes the static (color, size, etc.) and dynamic (inertial) properties of the components. Convert the URDF file into the Simulation Description Format (SDF ) file for Gazebo.

We can now import the AI rover model into the Gazebo simulator. We have tested the AI rover model to be certain that everything developed is syntactically correct (check_urdf) and operational (roslaunch). Later we will integrate a simulated differential-drive motor and controller, which are the beginnings of the AI rover’s autonomous navigation. To visualize the internal structure of our AI rover model, we use the urdf_to_graphiz tool. Force its installation with:
$ sudo apt-get install liburdfdom-tools
The urdf_to_graphiz tool generates a PDF file with a logical model of the AI rover hardware (Figure 4-15). The graphical information from the tool organizes the hardware design of the AI rover. The diagram helps us visually understand relationships among the rover components. Figure 4-15 illustrates the hardware relationship from our current ai_rover.urdf model to component geometries. To display the visual model in Figure 4-15, execute the following lines (evince is a PDF reader):
$ urdf_to_graphiz ai_rover.urdf
$ evince ai_rover.pdf

An illustration of A I rover hardware. From the top, the base link points 2 arrows labeled x y z, 0, 0.3, 0; r p y, 0, negative 0, 0 and x y z, 0, negative 0.3, 0; r p y, 0, negative 0, 0 to the joint left wheel and joint right wheel, respectively. The joint left wheel points to the left wheel. The joint right wheel points to the right wheel.

Figure 4-15

AI rover joint wheel connections

Background Information on Gazebo

We will be using the Gazebo simulator for AI rover experimentation . The simulator supplies multiple development and deployment utilities. Typical Gazebo applications are the following:
  • Development of deep learning algorithms

  • Development of control algorithms

  • Simulation of sensor data for LiDAR systems, cameras, contact sensors, proximity sensors, etc.

  • Advanced physics engines, such as the Open Dynamics Engine (ODE)

Now we review the actual process of loading the URDF description of the AI rover into Gazebo. We will first test the AI rover model by taking control of the wheels to move the AI rover, in a limited fashion, within a simulated world with obstacles. This will be done at first without a two-wheeled differential-drive control system. We will develop that later, in the advanced sections of this chapter, by extending our AI rover model to have the independent ability to control its own continuous wheel joints, graph sensor data, and verify and validate control and deep learning algorithms.

Starting Gazebo

To test whether Gazebo has been installed correctly, we can enter the following Linux terminal command:
       $ gazebo

If Gazebo is not installed, refer to Chapter 3.

Every time that Gazebo is run, two different processes are created. The first is the Gazebo Server (gzserver), which is responsible for the overall simulation. The second process is the Gazebo Client (gzclient), which starts the user GUI used to control the AI rover.

Note

If you execute the $ gazebo Linux terminal command and get a series of errors or warning messages, you may have previous incarnations of ROS nodes running. Execute the $ rosnode list command to determine if there are any previously running nodes. If there are any ROS nodes still active, simply execute $ rosnode kill -a. This command kills all running ROS nodes. Then, simply run the $ gazebo command once again. Be certain to always check for any node warning messages.
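If the leftover process is Gazebo itself rather than a ROS node, the standard Linux killall command clears the two Gazebo processes mentioned above:
$ killall -9 gzserver gzclient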

A successful launch of Gazebo will create a window similar to Figure 4-16.

A screenshot of the Gazebo window. Below the menu bar, it has world, insert, and layers tabs on the left. The world tab has G U I, scene, spherical coordinates, physics, atmosphere, wind, models, and lights. On the right, below the toolbar, it has 3 perpendicular axes in the virtual world. The vertical axis is highlighted with a box.

Figure 4-16

Gazebo screen

There are two main areas: the simulation display window and the tabs panel. The simulation display window is where our generated world (and rover) will be displayed. The toolbar located at the very top of the simulation display window contains the symbols that control the simulated world. (Note the little red box; we will come back to it in a moment.) The tabs panel has three tabs: World, Insert, and Layers.

The World tab provides hierarchical access to sub-elements, such as GUI, Scene, Spherical Coordinates, Physics, Models, and Lights. While all of these categories are fascinating, at this time we are interested in the Models tab—where our AI rover model resides. We will introduce other categories as needed.

The Insert tab gives access to models developed by us (local) and others (cloud, located at http://gazebosim.org/models ). These models may be inserted into our active world.

The Layers tab allows toggling between different visual parts of our simulated world. We use this to “debug” our world view; for instance, determining if there are any unexpected collisions. The Layers tab initially contains no layers. As we develop our world further, we can add layers.

Gazebo Environment Toolbar

The toolbar is located at the very top of the Gazebo environment. Let’s review the symbols that appear from left to right within the Gazebo toolbar; they have the following capabilities and can also be seen in Figure 4-17.

A screenshot of the Gazebo window. Below the menu bar, it has world, insert, and layers tabs on the left. The world tab has G U I, scene, spherical coordinates, physics, atmosphere, wind, models, and lights. In the center, it has a pulled screen with a vertical axis in the virtual world. On the right, it has force, position, and velocity tabs.

Figure 4-17

The Gazebo environment toolbar

  • Selection Mode: This mode selects the 3D AI rover or its components within the Gazebo environment. The properties of the AI rover or its components are listed within the World panel.

  • Translation Mode: This mode selects the AI rover or its components when a cursor is clicked around any part of the AI rover. There will be a 3D box wrapped around the selected component or even the AI rover itself. We can then move any part of the AI rover to any position required.

  • Rotation Mode: This mode is responsible for selecting the AI rover model when a cursor selects and draws a box around it. You can then rotate the AI rover model on either its roll, pitch, or yaw axis.

  • Scale Mode: This mode can select the AI rover sub-components, such as the box component. The scaling operation only works with very simple 3D shapes, such as a cube in the case of the chassis for the AI rover.

  • Undo Command: This will undo the very last action committed by the developer. We can repeat the undo operation to undo a series of actions in a linear format.

  • Redo Command: This likewise will redo the last action that was deleted by the undo command. So it will reverse and restore what was eliminated by the undo command.

  • Box, Sphere, and Cylinder Modes: These three modes, identified by their shapes, allow one to create the corresponding shapes with varying dimensions within the Gazebo environment. The scaling mode can be used to modify the dimensions of these simple shapes.

  • Lighting Mode: This allows one to change the angle and intensity of light within the Gazebo environment .

  • Copy Mode: Copies the selected items within the Gazebo environment.

  • Paste Mode: This mode pastes the copied item onto the Gazebo environment.

  • Selection and Alignment Mode: This mode will align two objects with each other in either the x, y, or z-axis.

  • Join Mode: This mode will allow one to select the location as to where two objects will be joined.

  • Alter View Angle Mode: This mode will allow one to change the angle of view for the user.

  • Screenshot Mode: This mode will take a screenshot of the simulation environment for documentation purposes. All files are saved within the ~/gazebo/pictures directory.

  • Log Mode: This mode records all of the data and simulation values being generated and stores them in the ~/gazebo/log directory. This will be used to debug the deep learning routines for the AI rover.

The Invisible Joints Panel

We now revisit the red box in Figure 4-16. Dragging the dotted control to the left accesses the Joints panel —our testing interface for any active model; for instance, our rover. Dragging the control will then create a display similar to Figure 4-18.

A screenshot of the Gazebo toolbar. From the left to right, it has options as follows. Selection mode; translation mode; rotation mode; scale mode; undo command; redo command; box, sphere, and cylinder modes; lighting mode; copy; paste; selection and alignment mode; join; alter view angle mode; screenshot mode; log mode.

Figure 4-18

Gazebo Joints panel screen pulled from right to left

The Joints panel has one Reset button and multiple tabs. The Reset button will return our active model to its initial configuration. The tabs display the active model’s available joints and properties. In our AI rover, the only available joints will be the two wheels. The three tabs are defined as follows:
  • Force: The effort applied to each continuous joint; for a rotating wheel joint this is a torque, measured in newton-meters (N·m).

  • Position: <x,y,z> 3D coordinates and <roll,pitch,yaw> rotation.

  • Velocity: The target velocity of the joint (an angular velocity for our wheel joints). It can be set directly or regulated by the PID values.

The Gazebo Main Control Toolbar

Now that we have reviewed the toolbar used to control the shapes, dimensions, and operations that occur within a Gazebo simulation, we must review the main control toolbar located at the very top of the Gazebo environment itself. This toolbar contains the File, Edit, Camera, View, Window, and Help menus that would be expected in any modern GUI environment. We will now review the basic functionality found within each of them:
  • File has the sub-functions of Save World, Save World As, Save Configuration, Clone World, and Quit.

  • Edit has the sub-functions of Reset Model Poses, Reset World, Building Editor, and Gazebo Model Editor.

  • Camera has the sub-functions of Orthographic, Perspective, FPS View Control, Orbit View Control, and Reset View Angle.

  • View has the sub-functions of Grid, Origin, Transparent, Wireframe, Collisions, Joints, Center of Mass, Inertias, Contacts, and Link Frames.

  • Window has the sub-functions of Topic Visualization, Oculus Rift Virtual Reality Viewer, Show GUI Overlays, Show Toolbars, and Full Screen.

  • Help has the sub-functions of Hot Key Chart and Gazebo About.

Now that we have reviewed the control toolbar functions, we must transition to how we run simulations and play back simulation runs. We must modify the URDF file of the AI rover into a form that is compatible with the Gazebo simulation environment by transforming that very same URDF file into an SDF (Simulation Description Format) file.

URDF Transformation to SDF Gazebo

Now we must transform the AI rover’s URDF file so that it can be accepted and processed by the Gazebo environment; that is, we must convert the URDF to an SDF file. SDF is an extension of URDF that uses the same XML conventions. By making the appropriate modifications to the URDF file describing the AI rover, we allow the Gazebo environment to convert the URDF to the SDF robot description it requires. We will now describe the required steps to transform URDF files into SDF files.

To allow this transformation to be complete, we must add the correct <gazebo> tags to the URDF file that describes the AI rover chassis, wheels, and caster within the Gazebo simulator. Note that the chassis of the AI rover includes not only the physical box but also the mass and moment of inertia of the embedded electronics, such as the Raspberry Pi. The <gazebo> tag allows one to express elements found in SDF but not in URDF. If a <gazebo> tag is used without a reference="" property, then the tag applies to the entire AI rover model. The reference parameter usually refers to a specific link or joint, such as the wheels defined within the AI rover URDF file. We can also define links and joints that exist in SDF but not in the URDF file describing the AI rover. With this extension, we can develop sophisticated simulations of a deep learning controller controlling the AI rover within a Gazebo environment. In this and the next chapter we will review some of the tutorials found at http://gazebosim.org/tutorials/?tut=ros_urdf for a list of elements, such as links and joints, that can further enhance the simulations of the AI rover. Examples of such links and joints would be the fixed sensors and dynamic actuators for the AI rover.

Not only can we define the links and joints within Gazebo, but we can also define and specify the color within Gazebo. However, we have to make modifications in Gazebo that are different than the AI rover model definitions that were defined within Rviz. For example, we cannot reuse the references defined for the color of the components. As such, we must add a Gazebo <material> for each link. The Gazebo tags can be placed in the AI rover model before the ending </robot> tag, as follows:
<gazebo reference="base_link">
     <material>Gazebo/Blue</material>
</gazebo>
<gazebo reference="right_wheel">
     <material>Gazebo/Black</material>
</gazebo>
<gazebo reference="left_wheel">
     <material>Gazebo/Black</material>
</gazebo>

All <gazebo> tags for the model must be defined before the closing </robot> tag, so it is convenient to group them near the end of the file. However, there are caveats with the other elements in Gazebo.

If a link, such as the AI rover’s 3D chassis box or the caster, does not specify <visual> and <collision> elements, Gazebo has nothing to render or check for it: the link will be invisible to sensors such as lasers and will be ignored by the simulated environment’s collision checking.

Checking the URDF Transformation to SDF Gazebo

Just as we had to verify and validate the URDF files earlier in this chapter with the check_urdf tool found within Noetic ROS, we also have to re-examine the URDF files that have been upgraded with the <gazebo> extension tags. We need to do this to catch any errors before the files are transformed into the SDF files required for exporting the AI rover model to the Gazebo simulation environment. We will save the extended ai_rover.urdf file under the name ai_rover.gazebo, to designate that this file is to be used for Gazebo simulations of the AI rover. The tool used to verify that a URDF file with <gazebo> extensions can be transformed into an SDF file by and for Gazebo is the gz sdf toolset. Two forms of the command are useful:
$ gz sdf -p ai_rover.gazebo
Or, letting rospack locate the package directory for you:
$ gz sdf -p $(rospack find ai_robotics)/urdf/ai_rover.gazebo

We will first test to determine if the Gazebo references work for the color schemes for the chassis and wheels. We will also use Gazebo references to develop the differential drive controller for the AI rover itself by this chapter’s end.

Now that we’ve reviewed the basics of utilizing the verification process for a URDF file that has <gazebo> extension tags, these same extension tags must be placed between the <link> and <joint> tags of each component, such as the base_link and both wheels for the AI rover. We must review an example of our AI rover URDF file that has the <gazebo> extension tags. The examples are highlighted in bold as follows:
<?xml version='1.0'?>
<robot name="ai_rover">
    <!-- Base Link -->
    <link name="base_link">
    </link>
        <gazebo reference="base_link">
                <material>Gazebo/Blue</material>
        </gazebo>
    <!-- Right Wheel -->
    <link name="right_wheel">
    </link>
    <gazebo reference="right_wheel">
        <material>Gazebo/Black</material>
    </gazebo>
    <joint name="joint_right_wheel" type="continuous">
    </joint>
    <!-- Left Wheel -->
    <link name="left_wheel">
    </link>
    <gazebo reference="left_wheel">
        <material>Gazebo/Black</material>
    </gazebo>
    <joint name="joint_left_wheel" type="continuous">
    </joint>
</robot>

Once we have this URDF file with the first Gazebo extensions created, we must then convert it to an SDF file to be certain that there are no issues with the transformation process for Gazebo. We execute the following command: $ gz sdf -p ai_rover.gazebo. Once we execute this command in the correct directory, we should see a terminal listing of the correct and equivalent SDF file being generated, with no printed errors. Having reached the point of generating an SDF file, we must now develop the required launch and simulation files for starting our initial ROS simulation within Gazebo.

First Controlled AI Rover Simulation in Gazebo

As we develop our first controlled AI rover simulation within Gazebo, we must create two files that separate two steps of building this simulation environment. The first is the launch file, which launches and views the AI rover, the environment, and any obstacles or mazes presented within the simulated environment. The second file describes what the Gazebo simulation world will contain, such as mazes, obstacles, and dangers. Note that the second file (the world file) is itself loaded by the first (the launch file). The launch file (ai_rover_gazebo.launch) should be located within the launch directory and the Gazebo world file within the worlds directory, both of which are sub-directories of the ai_robotics directory.

Therefore, this launch file launches the empty world as follows:
<launch>
  <!-- We use roslaunch and empty_world.launch -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="$(find ai_robotics)/worlds/ai_rover.world"/>
    <arg name="paused" default="false"/>
    <arg name="use_sim_time" default="true"/>
    <arg name="gui" default="true"/>
    <arg name="headless" default="false"/>
    <arg name="debug" default="false"/>
  </include>
  <!-- Spawn ai_rover into Gazebo -->
<node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model" output="screen"
      args="-file $(find ai_robotics)/urdf/ai_rover.gazebo -urdf -model ai_rover"/>
</launch>
We will now need to create the sub-directory worlds for the rover. This can be done with the following terminal commands:
$ cd ~/catkin_ws/src/ai_robotics
$ mkdir worlds
$ cd worlds

This launch file launches the empty world that is contained within the gazebo_ros package. We can later develop a world that contains the Egyptian catacomb layout by replacing the ai_rover.world file. The AI rover model (the URDF with the <gazebo> extension tags) is spawned into that world by the spawn_model node from the gazebo_ros package.
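The same spawn can also be performed by hand against an already-running Gazebo, which is occasionally handy for debugging; here is a rough example using the same file path assumed in the launch file above:
$ rosrun gazebo_ros spawn_model -file $(rospack find ai_robotics)/urdf/ai_rover.gazebo -urdf -model ai_rover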

Now that we have the worlds directory created, we can begin to develop the SDF that will become the ai_rover.world file that is launched with the aforementioned launch file and includes additional items, such as construction cone obstacles . Therefore, we will closely examine the ai_rover.world file that describes the ground plane, the light source (sun), and the two separated construction cones. The source code for the ai_rover.world is the following:
<?xml version="1.0"?>
<sdf version="1.4">
  <world name="default">
    <include>
      <uri>model://ground_plane</uri>
    </include>
    <include>
      <uri>model://sun</uri>
    </include>
    <include>
      <uri>model://construction_cone</uri>
      <name>construction_cone_1</name>
      <pose>-3.0 0 0 0 0 0</pose>
    </include>
    <include>
      <uri>model://construction_cone</uri>
      <name>construction_cone_2</name>
      <pose>3.0 0 0 0 0 0</pose>
    </include>
  </world>
</sdf>
We can always modify this file to include more construction cones and other obstacles by adding further <include>, <uri>, <name>, and <pose> tags. The <include> tag pulls an additional model into the world, such as a construction cone. The <uri> tag identifies which model it is. The <name> tag gives the obstacle a unique name. The <pose> tag lists x y z roll pitch yaw (in meters and radians) and represents a relative coordinate transformation between a frame and its parent: link frames are defined relative to the model frame, and joint frames relative to their child link frames. Now we can execute the ai_rover_gazebo.launch file with the following command:
$ roslaunch ai_robotics ai_rover_gazebo.launch
Once you execute this terminal command, you should have the display shown in Figure 4-19.

A screenshot of the Gazebo window. Below the menu bar, it has world, insert, and layers tabs on the left. The world tab has G U I, scene, spherical coordinates, physics, atmosphere, wind, models, and lights. On the right, below the toolbar, it has a rover in the center with 3 perpendicular axes. It has 2 construction cones on the left and right.

Figure 4-19

First AI rover Gazebo simulation

First Deep Learning Possibility

Now that we have developed the first AI rover Gazebo simulation setup , we must experiment to explore methods to cause locomotion, and then eventually intelligent navigation, obstacle avoidance, and ultimately sense-and-avoid cognitive capabilities. The first use of a deep learning controller might take the form of controlling any type of unexpected behavior of the AI rover within Gazebo. Unexpected behavior would include not traveling within a straight line while navigating obstacles. This is because the URDF file with the <Gazebo> extension tags might need further tuning to represent the physics within Gazebo. We might need to develop an intelligent and adaptive deep learning controller that controls the AI rover. We might need to modify properties such as the mass distribution and moment of inertia values for the AI rover. If these values were constantly changing, we would need a controller that would likewise adapt accordingly.

Moving the AI Rover with Joints Panel

Now we should test the underlying physics engine provided within Gazebo. One effective way to accomplish this is to make the model of the AI rover move within Gazebo. Therefore, to test the physics engine for the AI rover we use joint control via the Joints panel, located on the right of the Gazebo environment. We need to be in selection mode, which we enter by clicking on the AI rover model; the model will be highlighted with a white outline box. Once the white box appears, we can see the values for the joint_left_wheel and the joint_right_wheel displayed within the Force tab of the Joints panel. We input very small values, such as 0.00050 newton-meters for the joint_left_wheel and 0.00002 newton-meters for the joint_right_wheel. We should then see our AI rover prototype move along an arcing path. We should also try to collide the AI rover with one of the construction cones. We do this to test whether the collision tags found within the AI rover URDF file work. We can see that the collision tags do indeed work by examining the crash display shown in Figure 4-20.

An illustration of an A I rover crashed with a construction cone on the grid.

Figure 4-20

First AI rover crash with construction cone

Summary

We have achieved a lot within the pages of Chapter 4. We have reviewed how to develop a model for the AI rover with URDF. We have shown how to extend a URDF file with <gazebo> tags to allow for Gazebo simulations. We have evaluated RViz, the 3D environment for designing models. We have reviewed the process of developing models in RViz and deploying them to Gazebo. We have worked with multiple ROS commands to launch these simulations. We will see in Chapter 5 how we can use the XML macro (Xacro) language to develop even more sophisticated AI rover simulations, by allowing the AI rover, sensors, actuators, and simulated environments to be developed more efficiently. We will also use more examples of UML modeling for these very same Xacro files.

Extra Credit

Exercise 4.1: What additional changes would you make to the ai_rover.world file to include obstacles other than the construction cones?

Exercise 4.2: What additional changes would you make to spawn an additional number of construction cones within the ai_rover.world? How can you place them differently or symmetrically, etc.?

Exercise 4.3: How does the use of the Joints panel highlight the need for a controller and driver for the differential-wheeled system? Why can we not develop a differential driver within Rviz?

Exercise 4.4: Why do we need to verify and validate both the URDF and SDF files being developed with tools such as check_urdf?
