In my previous post, here, I showed how to create multiple instances of a worker for distributed computation. This works well for pleasantly parallel algorithms such as genetic algorithms (GAs). Unfortunately, this follow-on post took me quite a while to write, as I ran into some challenging issues when moving from a simple adding service to ROS/Gazebo and robot simulation.
A simple Roomba-like robot with a laser scanner in ROS/Gazebo.
ROS/Gazebo
The Robot Operating System (ROS) and Gazebo work together to simulate robotic systems without the need for a physical platform. ROS itself runs as a distributed system of nodes, each handling one function of a robotic system (controller, sensors, actuators, etc.). Typically, a ROS master is created, which in turn launches the nodes related to a robot as well as a Gazebo node for conducting a simulation. Under normal circumstances, only one robot/environment can be simulated at a time. This presents a challenge for GAs, as we typically want, and for practical purposes often need, the ability to distribute simulations across many instances.
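To make the node concept concrete, here is a minimal rospy node sketch. The node and topic names (simple_driver, cmd_vel) are illustrative assumptions, not part of the basicbot code:

#!/usr/bin/env python
# Minimal illustrative ROS node: registers with the ROS master and
# publishes velocity commands on a topic. The names 'simple_driver'
# and 'cmd_vel' are assumptions for illustration only.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('simple_driver')               # register with the ROS master
pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)

rate = rospy.Rate(10)                          # publish at 10 Hz
while not rospy.is_shutdown():
    msg = Twist()
    msg.linear.x = 0.5                         # drive forward at 0.5 m/s
    pub.publish(msg)
    rate.sleep()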
This can be difficult to overcome, as I have not found a way to launch more than one ROS master on a machine at a time. A few possible solutions exist. Perhaps the simplest is spinning up multiple VMs, with the GA sending genomes over a network connection and receiving fitnesses back from the distributed workers. However, this has a potentially high computational cost, as each VM requires a significant amount of resources. Another solution is to use the namespace tag within ROS to create multiple instances. However, this is not stable in every version of ROS. For this post, I am using code that works with ROS Indigo/Gazebo 7. Newer distributions of Kinetic should support this, but more discussion on the problem can be seen here and here.
Namespacing
A typical roslaunch file specifies the nodes needed to launch a simulation. It might look like the following:
<launch>
  <!-- Code omitted for brevity. -->

  <!-- Start the Gazebo server. -->
  <env name="GAZEBO_MASTER_URI" value="http://localhost:11345"/>
  <node name="gazebo" pkg="gazebo_ros" type="$(arg script_type)" respawn="$(arg respawn_gazebo)"
        output="screen" args="$(arg command_arg1) $(arg command_arg2) $(arg command_arg3) -e $(arg physics)
        $(arg extra_gazebo_args) $(arg world_name)">
  </node>

  <!-- Load the transporter node. -->
  <node name="basicbot_transporter" pkg="basicbot_ga" type="basicbot_transporter.py" output="screen">
  </node>

  <!-- Load the turn_drive_scan node. -->
  <node name="turn_drive_scan" pkg="basicbot_ga" type="turn_drive_scan_node.py" output="screen">
  </node>

  <!-- Load the step world node. -->
  <node name="step_world" pkg="world_step" type="step_world_server">
  </node>

  <!-- Load the URDF into the ROS parameter server. -->
  <param name="robot_description"
         command="$(find xacro)/xacro '$(find basicbot_description)/urdf/basicbot.xacro'" />

  <!-- Run a Python script to send a service call to gazebo_ros to spawn a URDF robot. -->
  <node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" respawn="false" output="screen"
        args="-urdf -model basicbot -param robot_description">
  </node>
</launch>
Note that some code has been removed for brevity; full source for this can be seen here. Each component of the robot, and the plugins associated with the world, are started in their own nodes. For example, the controller for this simple robot is in the turn_drive_scan node, and a world plugin that controls when the simulation is stepped runs in the step_world node. All of these nodes exist within the default namespace of the ROS master. Adding another Gazebo instance would require duplicating these nodes with new names, perhaps basicbot_transporter_2, turn_drive_scan_2, etc. While this may appear to work, the issue arises when the nodes in a particular instance try to communicate with each other. By ROS convention, they pass information back and forth over named channels, called topics, within the ROS master. Without custom names, collisions occur.
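To see why namespaces matter, consider how ROS resolves a relative topic name: a node publishes and subscribes relative to the namespace it was launched in. A quick sketch (the node and topic names here are illustrative assumptions):

#!/usr/bin/env python
# Illustration of ROS name resolution. If this node is launched inside
# <group ns="ga1">, the relative topic 'scan' resolves to '/ga1/scan',
# so an identical node under 'ga2' uses '/ga2/scan' instead -- no
# collision. A leading slash ('/scan') would bypass the namespace and
# collide across instances.
import rospy
from sensor_msgs.msg import LaserScan

rospy.init_node('scan_repeater')
pub = rospy.Publisher('scan', LaserScan, queue_size=1)   # relative name: namespaced
rospy.loginfo('Publishing on resolved topic: %s', pub.resolved_name)
rospy.spin()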
The launch file specification provides a group tag to facilitate grouping nodes together. This places each set of nodes in its own namespace, easily eliminating the naming conflicts. Code for this can be seen below and is available here.
<launch>
  <!-- Code omitted for brevity. -->
  <group ns="ga1">
    <remap from="/clock" to="/ga1_clock"/>

    <!-- Start the Gazebo server. -->
    <env name="GAZEBO_MASTER_URI" value="http://localhost:11345"/>
    <node name="gazebo" pkg="gazebo_ros" type="$(arg script_type)" respawn="$(arg respawn_gazebo)"
          args="$(arg command_arg1) $(arg command_arg2) $(arg command_arg3) -e $(arg physics)
          $(arg extra_gazebo_args) $(arg world_name)">
    </node>

    <!-- Load the URDF into the ROS parameter server. -->
    <param name="robot_description"
           command="$(find xacro)/xacro '$(find basicbot_description)/urdf/basicbot.xacro'" />

    <!-- Run a Python script to send a service call to gazebo_ros to spawn a URDF robot. -->
    <node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" respawn="false" output="screen"
          args="-urdf -model basicbot -param robot_description -gazebo_namespace /ga1/gazebo"/>

    <!-- Load the transporter node. -->
    <node name="basicbot_transporter" pkg="basicbot_ga" type="basicbot_transporter.py" output="screen"/>

    <!-- Load the turn_drive_scan node. -->
    <node name="turn_drive_scan" pkg="basicbot_ga" type="turn_drive_scan_node.py" output="screen">
      <remap from="clock" to="ga1_clock"/>
    </node>

    <!-- Load the step world node. -->
    <node name="step_world" pkg="world_step" type="step_world_server">
      <remap from="clock" to="ga1_clock"/>
    </node>
  </group>

  <group ns="ga2">
    <remap from="/clock" to="/ga2_clock"/>

    <!-- Start the Gazebo server on its own port. -->
    <env name="GAZEBO_MASTER_URI" value="http://localhost:11346"/>
    <node name="gazebo" pkg="gazebo_ros" type="$(arg script_type)" respawn="$(arg respawn_gazebo)"
          args="$(arg command_arg1) $(arg command_arg2) $(arg command_arg3) -e $(arg physics)
          $(arg extra_gazebo_args) $(arg world_name)">
    </node>

    <!-- Load the URDF into the ROS parameter server. -->
    <param name="robot_description"
           command="$(find xacro)/xacro '$(find basicbot_description)/urdf/basicbot.xacro'" />

    <!-- Run a Python script to send a service call to gazebo_ros to spawn a URDF robot. -->
    <node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" respawn="false" output="screen"
          args="-urdf -model basicbot -param robot_description -gazebo_namespace /ga2/gazebo"/>

    <!-- Load the transporter node. -->
    <node name="basicbot_transporter" pkg="basicbot_ga" type="basicbot_transporter.py" output="screen"/>

    <!-- Load the turn_drive_scan node. -->
    <node name="turn_drive_scan" pkg="basicbot_ga" type="turn_drive_scan_node.py" output="screen"/>

    <!-- Load the step world node. -->
    <node name="step_world" pkg="world_step" type="step_world_server"/>
  </group>
  <!-- Code omitted for brevity. -->
</launch>
By wrapping the previous code in group tags, we have created two instances of Gazebo that run two robots in parallel in completely separate environments. You may notice that an extra line or two has been added above the Gazebo node, just under each group tag.
While most topics are automatically remapped into their own namespaces (ga1, ga2), the /clock topic is not. Without an explicit remapping, each simulation would publish to and read from the shared /clock topic, which causes problems when running multiple instances: time in each simulation would move forward, and potentially backward, as the instances overwrote one another's clocks!
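As a quick illustration of why a shared clock is a problem, a node using simulated time simply trusts whatever arrives on its clock topic. A sketch (the node name is made up):

#!/usr/bin/env python
# Illustrative only: when the /use_sim_time parameter is set (normally in
# the launch file), rospy.get_rostime() returns whatever was last published
# on this node's clock topic. If two Gazebo instances shared /clock, that
# value could jump backward whenever the other instance published an older
# time. The remap above points each group at its own clock (/ga1_clock).
import rospy

rospy.init_node('clock_watcher')
while not rospy.is_shutdown():
    now = rospy.get_rostime()   # simulated time, read from the remapped clock topic
    rospy.loginfo('sim time: %.3f', now.to_sec())
    rospy.sleep(1.0)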
The line below the clock remapping is also required, because by default every Gazebo instance runs on the same port. Each instance of Gazebo needs its own port address: <env name="GAZEBO_MASTER_URI" value="http://localhost:11346"/> sets a new port for a particular instance of Gazebo. Change the port accordingly for each grouped instance that you launch.
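The same idea can be seen outside roslaunch. A minimal sketch, assuming gzserver is on the PATH and using the same ports as the launch file above:

#!/usr/bin/env python
# Sketch: start several Gazebo servers, each with a unique
# GAZEBO_MASTER_URI port, mirroring what the <env> tags do per group.
import os
import subprocess

BASE_PORT = 11345
procs = []
for i in range(2):                                   # one server per GA group
    env = os.environ.copy()
    env['GAZEBO_MASTER_URI'] = 'http://localhost:%d' % (BASE_PORT + i)
    procs.append(subprocess.Popen(['gzserver'], env=env))

for p in procs:                                      # wait for the servers to exit
    p.wait()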
Putting It All Together
After downloading the repository from here, run catkin_make within the base directory and launch the simulation by opening two terminals. In the first, launch the ROS master with the command roslaunch basicbot_ga basicbot_ga.launch. Once the processes have spawned, navigate to the src/basicbot_ga/test directory in the second terminal and run python non_ga_server.py. This launches the process that distributes genomes to each instance of Gazebo.
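The real distribution logic lives in non_ga_server.py in the repository. Purely to illustrate the shape of the idea, a sketch might hand genomes to whichever namespaced instance is free; every name below (evaluate_genome, the dummy fitness) is made up, not the repo's API:

#!/usr/bin/env python
# Hedged sketch only -- not the code in non_ga_server.py. It shows the
# general shape: queue genomes, let one worker per namespaced Gazebo
# instance (/ga1, /ga2) pull work, and collect fitnesses back.
import Queue
import threading

NAMESPACES = ['/ga1', '/ga2']

def evaluate_genome(namespace, genome):
    # Placeholder: send the genome to the transporter node under
    # `namespace`, step the simulation, and return the measured fitness.
    return sum(genome)  # dummy fitness so the sketch runs standalone

def worker(namespace, genomes, results):
    while True:
        idx, genome = genomes.get()
        if genome is None:              # sentinel: no more work
            return
        results.put((idx, evaluate_genome(namespace, genome)))

genomes, results = Queue.Queue(), Queue.Queue()
for i, g in enumerate([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]):
    genomes.put((i, g))
for ns in NAMESPACES:                   # one sentinel per worker thread
    genomes.put((None, None))

threads = [threading.Thread(target=worker, args=(ns, genomes, results))
           for ns in NAMESPACES]
for t in threads: t.start()
for t in threads: t.join()

while not results.empty():
    print(results.get())                # (genome index, fitness) pairs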
Remapped topics. Note the same topics repeated, but prepended with the group tag: `/ga1/odom`, `/ga2/odom`, etc.
You can view the remapped topics by typing rostopic list in a terminal that has sourced the setup file located at devel/setup.sh. Output should be something similar to this:
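As an illustrative excerpt based on the caption and remaps above (a real listing will contain many more robot and Gazebo topics):

/ga1/odom
/ga1_clock
/ga2/odom
/ga2_clock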
If you are curious, you can open another terminal and run export GAZEBO_MASTER_URI=http://localhost:XXXXX, where XXXXX is the port of one of the groups. Running gzclient will then launch a GUI showing the robot moving about its environment. You should see something like this if you launch a gzclient for each of your groups.
Running multiple instances of Gazebo within one ROS master using group tags and remapping.
Summary
In summary, the keys to realizing multiple Gazebo instances within one ROS master are:
- Using the right version of Gazebo/ROS (Kinetic currently has issues).
- Grouping nodes with the group tag.
- Remapping /clock explicitly for each group.
- Exporting a unique port for each Gazebo instance.
Software Used:
- Ubuntu 14.04
- ROS Indigo
- Gazebo 7
- VMWare Fusion