
Nav2 implementation

There are two ways of using the Vision-RTK2 with Nav2:

  1. (Recommended) The Vision-RTK2 as the only localization sensor on the platform. This method skips the EKF/UKF blocks that fuse multiple data sources and the driver directly generates the TF tree.

  2. (Work in progress) Fuse the Vision-RTK2 measurements with other sensors (e.g., LiDAR) using the EKF/UKF blocks, while assuming that the Vision-RTK2 is the most accurate source (i.e., its poses are fused in absolute, non-differential mode).


Under the first mode, the Vision-RTK2 is assumed to be the only localization source, and a corresponding TF tree is generated by the Fixposition ROS driver following the requirements of Nav2 (see Setting Up Transformations — Nav2 documentation for more information). The driver outputs the 'odom' and 'map' frames following the REP 103 and REP 105 conventions. Thus, the following transformation chain is generated:

earth → map → odom → base_link

Here is an introductory explanation for the relevant frames:

  • 'odom':

    • World-fixed frame, where the pose of the platform can drift over time, without any bounds. The pose of the robot in the odom frame is guaranteed to be continuous, meaning that the pose of a mobile platform in the odom frame always evolves in a smooth way, without discrete jumps.

    • The odom frame is useful as an accurate, short-term local reference, but drift makes it a poor frame for long-term reference. It is usually computed from an odometry source (e.g., IMU, visual odometry, or wheelspeed measurements).

  • 'map':

    • World-fixed frame, where the pose of the robot should not significantly drift over time. This frame is not continuous, meaning the pose can present discrete jumps at any time.

    • The map frame is useful as a long-term global reference, but discrete jumps in position estimators make it a poor reference frame for local sensing and acting.

    • A localization component constantly re-computes the robot pose in the map frame based on sensor observations, therefore eliminating drift, but causing discrete jumps.

    • When defining coordinate frames with respect to a global reference like the earth, the default procedure should be to align the origin to the ENU coordinate system. If there is no other reference, the default position of the z-axis should be zero at the height of the WGS84 ellipsoid.

Based on this understanding, the 'odom' frame corresponds to the FP_A-ODOMSH message in ENU coordinates (equivalently '/fixposition/odometry_smooth_enu'), and the 'map' frame corresponds to the FP_A-ODOMENU message (equivalently '/fixposition/odometry_enu').

To use the Vision-RTK2 with Nav2, the following changes must be applied:

  1. Enable the following messages in the I/O configuration page of the sensor (see I/O messages):

    • FP_A-ODOMETRY

    • FP_A-ODOMENU

    • FP_A-ODOMSH

    • FP_A-LLH

    • FP_A-EOE_FUSION

    • FP_A-TF_VRTKCAM

    • FP_A-TF_POIVRTK

    • FP_A-TF_ECEFENU0

    • FP_A-TF_POIPOISH

    • FP_A-TF_POIIMUH

  2. In the configuration file of the ROS driver (fixposition_driver_ros2/launch/config.yaml), change the 'nav2_mode' to 'true' and 'qos_type' to 'default_long'.
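The change in step 2 might look like the following excerpt. This is a sketch only: the surrounding key layout is an assumption and may differ between driver versions, so adapt it to the structure of your own config.yaml.

```yaml
# fixposition_driver_ros2/launch/config.yaml (excerpt; surrounding keys
# are assumed and may differ between driver versions)
fixposition_driver_ros2:
  ros__parameters:
    nav2_mode: true           # let the driver publish the Nav2 TF tree
    qos_type: "default_long"  # QoS profile expected by the Nav2 stack
```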

  3. Run the 'setup_ros_ws.sh' bash script to set up the fixposition driver accordingly.

  4. (Optional) Configure wheelspeed measurements by setting up the corresponding configuration:

YAML
converter:
    enabled: true
    topic_type: "Odometry"     # Supported types: nav_msgs/{Twist, TwistWithCov, Odometry}
    input_topic: "/odom"       # Input topic name
    scale_factor: 1000.0       # To convert from the original unit of measurement to mm/s (note: this must be a float!)
    use_x: true                # Transmit the x axis of the input velocity
    use_y: false               # Transmit the y axis of the input velocity
    use_z: false               # Transmit the z axis of the input velocity

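The converter's behavior can be sketched as follows. This is a hypothetical Python rendition of the scaling and axis-selection logic described by the configuration above; `convert_wheelspeed` is an illustrative name, not part of the driver API.

```python
# Hypothetical sketch of the converter's logic: take the input velocity,
# apply 'scale_factor' to convert to mm/s, and zero out any axis whose
# use_* flag is false.

def convert_wheelspeed(vx, vy, vz, scale_factor=1000.0,
                       use_x=True, use_y=False, use_z=False):
    """Return the (x, y, z) wheelspeed in mm/s as integers."""
    return (
        int(round(vx * scale_factor)) if use_x else 0,
        int(round(vy * scale_factor)) if use_y else 0,
        int(round(vz * scale_factor)) if use_z else 0,
    )

# Example: 1.25 m/s forward velocity -> 1250 mm/s on the x axis only,
# with the default flags masking out the y and z axes.
print(convert_wheelspeed(1.25, 0.3, 0.0))  # -> (1250, 0, 0)
```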
Once the Fixposition ROS driver with Nav2 support is running, the following TF tree will be generated:

[Figure: frames-nav2.png — TF tree generated by the Fixposition ROS driver in Nav2 mode]

The Vision-RTK2 must be fully initialized and all the I/O messages mentioned in this guide enabled for the TF tree to be connected. Otherwise, the TF tree will be incomplete or separated.

The URDF file must handle the 'vrtk_link' → 'base_link' transformation. Without it, the TF tree will not be fully connected. The root of the robot's TF tree should be 'vrtk_link'.

With the Fixposition ROS driver correctly configured and the TF tree updated for Nav2 usage, the user simply connects the 'odom' and 'map' frames to the desired Nav2 components, such as waypoint followers, obstacle avoidance, and path planning algorithms. The localization outputs in both frames are handled directly by the ROS driver, so the user only needs to focus on the navigation and control aspects of the system.

As an example implementation, we connected the Vision-RTK2 to a Scout robot using the Fixposition ROS driver. You can find more details in the repository: https://github.com/fixposition/nav2_tutorial.

Note that the provided URDF file generates the necessary 'vrtk_link' transformation and imports the TF tree of the Scout robot. For other robots, you can simply replace the import line with your platform's URDF file.

  • Since the Vision-RTK2 is already doing the heavy lifting by fusing GNSS, IMU, and camera data into a reliable, smooth estimate, we recommend bypassing the traditional navsat_transform_node. Instead, you can write a custom node (or modify an existing one) that converts your sensor’s ECEF coordinates to your desired local coordinate frame (e.g., ENU) that’s consistent with your robot’s world frame.
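Such a conversion node essentially boils down to a fixed rotation of the ECEF offset from the datum into the local tangent plane. Below is a minimal sketch, assuming the datum is available as WGS84 latitude/longitude (e.g., derived from the FP_A-TF_ECEFENU0 message or the '/fixposition/datum' topic); the function names are illustrative, not driver API.

```python
import math

# WGS84 ellipsoid constants
WGS84_A = 6378137.0             # semi-major axis [m]
WGS84_E2 = 6.69437999014e-3     # first eccentricity squared

def geodetic_to_ecef(lat, lon, height):
    """WGS84 geodetic coordinates (radians, meters) to ECEF."""
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + height) * math.cos(lat) * math.cos(lon)
    y = (n + height) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + height) * math.sin(lat)
    return (x, y, z)

def ecef_to_enu(p, datum_lat, datum_lon, datum_ecef):
    """Rotate the ECEF offset from the datum into the local ENU frame."""
    dx, dy, dz = (p[i] - datum_ecef[i] for i in range(3))
    sl, cl = math.sin(datum_lon), math.cos(datum_lon)
    sp, cp = math.sin(datum_lat), math.cos(datum_lat)
    east = -sl * dx + cl * dy
    north = -sp * cl * dx - sp * sl * dy + cp * dz
    up = cp * cl * dx + cp * sl * dy + sp * dz
    return (east, north, up)

# Example: a point 1 m along ECEF +Y from a datum at lat=0, lon=0
# lies exactly 1 m east of the datum in the local ENU frame.
datum = geodetic_to_ecef(0.0, 0.0, 0.0)
print(ecef_to_enu((datum[0], datum[1] + 1.0, datum[2]), 0.0, 0.0, datum))
# -> (1.0, 0.0, 0.0)
```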

  • It’s necessary to change the QoS settings to 'default_long'.

  • When running the ROS driver, 'vrtk_link' should be in the middle of the TF tree, as the ROS driver handles the 'FP_ECEF' → 'vrtk_link' transformation and the robot's TF tree handles the transformations starting from 'vrtk_link' (e.g., to 'base_link' or 'base_footprint').

  • If the user observes separate TF trees, the most likely culprit is having two publishers (i.e., source frames) for the same child TF frame (e.g., 'odom'). For example, if your rover's TF tree generates an 'odom' frame, it conflicts with the 'odom' frame generated by the ROS driver, splitting the TF tree.

  • This method only works if the Vision-RTK2 is the only localization sensor in your system. Our provided example assumes no Nav2 EKF modules are spawned, as these also generate 'map' and 'odom' frames by fusing data from different sources, causing the aforementioned conflicts.

  • The '/fixposition/datum' topic will not publish anything until the Fusion engine is fully initialized (i.e., the first RTK-fixed measurement has been obtained). There is no need to stay outside after that, but obtaining the first fix is crucial for generating the ENU0 frame.

  • The datum message is used to ensure that your local coordinates (metric, in a plane tangential to the earth) are built with respect to the sensor's ENU0 frame. Otherwise, any position commands the user sends will not align with the sensor's position estimate ('/fixposition/odometry_enu'): the system would treat the first received position message as the origin, which is incorrect and causes conflicts when a trajectory saved in metric coordinates (instead of LLH) is reused after the node restarts. Using the sensor's datum message, you can ensure that your local coordinates stay consistent as long as the same ENU0 origin is set on the sensor.
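A toy numeric illustration of this pitfall, using made-up 2D ENU coordinates: waypoints saved in metric coordinates only stay valid across restarts if every session expresses them relative to the same origin.

```python
# Made-up 2D ENU positions to show why a fixed ENU0 datum matters.

def to_local(positions, origin):
    """Express absolute ENU positions relative to a chosen origin."""
    return [(x - origin[0], y - origin[1]) for x, y in positions]

# Absolute positions relative to the sensor's fixed ENU0 datum.
session_a = [(10.0, 5.0), (12.0, 7.0)]  # first run, robot starts at (10, 5)
session_b_start = (13.0, 9.0)           # after a restart, robot starts elsewhere

# Using the fixed datum (origin = ENU0 = (0, 0)): coordinates are stable.
print(to_local(session_a, (0.0, 0.0)))      # -> [(10.0, 5.0), (12.0, 7.0)]

# Using "first position = 0": the same physical waypoints shift by the
# difference between start positions, so a saved trajectory is off.
print(to_local(session_a, session_a[0]))    # -> [(0.0, 0.0), (2.0, 2.0)]
print(to_local(session_a, session_b_start)) # -> [(-3.0, -4.0), (-1.0, -2.0)]
```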

  • To connect the 'odom' frame of the Vision-RTK2 to your robot, you should create a 'vrtk_link' frame in your URDF file which takes care of spawning the 'vrtk_link' → 'base_link' transformation. Once this link exists, the TF tree will automatically be able to resolve the transformation 'odom' → 'base_footprint', as it can evaluate the chain 'odom' → 'vrtk_link' → 'base_link' → 'base_footprint'. In addition, please verify that the order of the transformation 'base_link' → 'base_footprint' is correct in your URDF.
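Conceptually, TF resolves 'odom' → 'base_footprint' by composing the intermediate transforms along this chain. The following simplified 2D sketch shows the idea; the frame names follow this guide, but the numeric offsets are purely illustrative, not real calibration values.

```python
import math

def compose(t1, t2):
    """Compose two 2D poses (x, y, yaw): apply t2 in the frame given by t1."""
    x1, y1, a1 = t1
    x2, y2, a2 = t2
    return (x1 + math.cos(a1) * x2 - math.sin(a1) * y2,
            y1 + math.sin(a1) * x2 + math.cos(a1) * y2,
            a1 + a2)

odom_T_vrtk = (2.0, 1.0, 0.0)   # driver: 'odom' -> 'vrtk_link' (illustrative)
vrtk_T_base = (-0.5, 0.0, 0.0)  # URDF: 'vrtk_link' -> 'base_link' (illustrative)
base_T_foot = (0.0, 0.0, 0.0)   # URDF: 'base_link' -> 'base_footprint'

# Chaining the transforms yields the pose of 'base_footprint' in 'odom'.
odom_T_foot = compose(compose(odom_T_vrtk, vrtk_T_base), base_T_foot)
print(odom_T_foot)  # -> (1.5, 1.0, 0.0)
```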

  • Note that 'base_footprint' is only used when working with a gridmap/occupancy map to avoid obstacles. It is not strictly necessary for navigation and is thus skipped in the provided example. By default, the ROS driver does not publish an 'odom' → 'base_footprint' transformation.


This method is still a work in progress. This section will be updated over time.

  • The 'odom' → 'base_link' transformation is usually computed via sensor fusion of odometry sources (e.g., IMU, wheel encoders, VIO) using the robot_localization package.

  • The 'map' → 'odom' transformation is usually provided by a different ROS package dealing with localization and mapping, such as AMCL. For GNSS-based localization, we suggest following this guide: Navigating Using GPS Localization — Nav2 documentation.

  • It is not necessary to use the navsat_transform_node, as the Vision-RTK2 already provides an ENU output, and the user can select the datum for the ECEF → ENU transformation in the Web Interface or obtain it from the '/fixposition/datum' topic published by the Fixposition ROS driver (see the navsat_transform_node documentation for more information).

  • We suggest using a rolling window for the global costmap, as outdoor environments can get quite large, to the point that it may not be practical to represent them in a single costmap. In addition, the sensor reports a fixed datum via the FP_A-TF_ECEFENU0 message, so the GPS coordinates have a consistent Cartesian representation.

YAML
global_costmap:
  global_costmap:
    ros__parameters:
      ...
      rolling_window: True
      width: 50
      height: 50
