Nav2 implementation

Installation

There are two ways to use the Vision-RTK2 for Nav2:

  1. Assume the Vision-RTK2 is the only localization sensor on your platform and skip the EKF/UKF methods.

  2. Fuse the Vision-RTK2 outputs with other sensors using the EKF/UKF methods, treating the Vision-RTK2 as the most accurate source (set as non-differential); a configuration sketch is shown after this list.
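
For the second approach, the following is a minimal robot_localization sketch that fuses the Vision-RTK2 smooth odometry as an absolute (non-differential) pose source. The Fixposition topic name matches the driver output discussed below; the node name, the wheel-odometry topic and the exact fusion flags are placeholders chosen for illustration and should be adapted to your platform.

    ekf_filter_node:
      ros__parameters:
        frequency: 30.0
        publish_tf: true
        map_frame: map
        odom_frame: odom
        base_link_frame: base_link
        world_frame: odom                      # this EKF publishes odom -> base_link
        # Vision-RTK2 smooth odometry, fused as an absolute pose (non-differential)
        odom0: /fixposition/odometry_smooth
        odom0_config: [true,  true,  true,     # x, y, z
                       true,  true,  true,     # roll, pitch, yaw
                       true,  true,  true,     # vx, vy, vz
                       true,  true,  true,     # vroll, vpitch, vyaw
                       false, false, false]    # ax, ay, az
        odom0_differential: false
        # Placeholder secondary sensor (e.g. wheel odometry), fused as velocities only
        odom1: /wheel/odometry
        odom1_config: [false, false, false,
                       false, false, false,
                       true,  false, false,    # vx
                       false, false, true,     # vyaw
                       false, false, false]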


As described in the transformation requirements of Nav2 (https://docs.nav2.org/setup_guides/transformation/setup_transforms.html#transforms-in-navigation2), the Fixposition ROS driver outputs the 'odom' and 'map' frames following the REP 103 and REP 105 conventions, so that the following transformation chain is available:

earth → map → odom → base_link

Here is an introductory explanation of the relevant frames:

  • 'odom':

    • World-fixed frame, where the pose of a mobile platform in the odom frame can drift over time, without any bounds. This drift makes the odom frame useless as a long-term global reference. However, the pose of a robot in the odom frame is guaranteed to be continuous, meaning that the pose of a mobile platform in the odom frame always evolves in a smooth way, without discrete jumps.

    • In a typical setup the odom frame is computed based on an odometry source, such as wheel odometry, visual odometry or an inertial measurement unit. The odom frame is useful as an accurate, short-term local reference, but drift makes it a poor frame for long-term reference.

    • The odom => base_link transform is usually published by the platform's odometry system using sensors such as wheel encoders. This is typically computed via sensor fusion of odometry sources (IMU, wheel encoders, VIO, etc.) using the robot_localization package.

  • 'map':

    • World-fixed frame, with its Z-axis pointing upwards. The pose of a mobile platform, relative to the map frame, should not significantly drift over time. The map frame is not continuous, meaning the pose of a mobile platform in the map frame can change in discrete jumps at any time.

    • In a typical setup, a localization component constantly re-computes the robot pose in the map frame based on sensor observations, therefore eliminating drift, but causing discrete jumps when new sensor information arrives.

    • The map frame is useful as a long-term global reference, but discrete jumps in position estimators make it a poor reference frame for local sensing and acting.

    • When defining coordinate frames with respect to a global reference like the earth:

      • The default should be to align the x-axis east, y-axis north, and the z-axis up at the origin of the coordinate frame.

      • If there is no other reference, the default position of the z-axis should be zero at the height of the WGS84 ellipsoid.

    • The map => odom transform is usually provided by a different ROS package dealing with localization and mapping, such as AMCL.

Based on this understanding, it is clear that the 'odom' frame corresponds to the FP_A-ODOMSH message (equivalently '/fixposition/odometry_smooth') and the 'map' frame corresponds to the FP_A-ODOMETRY message (equivalently '/fixposition/odometry_ecef').
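
In a Nav2 configuration this mapping is reflected in the costmap frame parameters: the global costmap operates in the 'map' frame, while the local costmap operates in the continuous 'odom' frame. The following is a minimal sketch using standard Nav2 parameters; only the frame-related entries are shown, the rest of the costmap configuration is omitted.

    global_costmap:
      global_costmap:
        ros__parameters:
          global_frame: map            # long-term global reference (FP_A-ODOMETRY)
          robot_base_frame: base_link
    local_costmap:
      local_costmap:
        ros__parameters:
          global_frame: odom           # continuous local reference (FP_A-ODOMSH)
          robot_base_frame: base_link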


It is not necessary to use the navsat_transform_node, as the Vision-RTK2 already provides an ENU output and the user can select the datum for the ECEF->ENU transformation in the Web Interface.

We suggest using a rolling window for the global costmap: outdoor environments can get large enough that representing them in a single static costmap may not be practical. In addition, the FP_A-TF_ECEFENU0 message provides a fixed datum, so the GPS coordinates have a consistent Cartesian representation.

global_costmap:
  global_costmap:
    ros__parameters:
      ...
      rolling_window: True
      width: 50    # meters
      height: 50   # meters

Frames

(Figure: frame diagram)

navsat_transform_node: https://docs.ros.org/en/jade/api/robot_localization/html/navsat_transform_node.html
