Nav2 implementation
Introduction
There are two ways of using the Vision-RTK2 with Nav2:
(Recommended) Use the Vision-RTK2 as the only localization sensor on the platform. This method skips the EKF/UKF blocks that fuse multiple data sources; the driver generates the TF tree directly.
(Work in progress) Fuse the Vision-RTK2 measurements with other sensors (e.g., LiDAR) using the EKF/UKF blocks, while treating the Vision-RTK2 as the most accurate source (i.e., non-differential).
Method 1: Vision-RTK2 as the only localization sensor
In this mode, the Vision-RTK2 is assumed to be the only localization source, and a corresponding TF tree is generated by the Fixposition ROS driver following the requirements of Nav2 (see Setting Up Transformations — Nav2 documentation for more information). The driver outputs the 'odom' and 'map' frames following the REP 103 and REP 105 conventions. Thus, the following transformation chain is generated:
earth → map → odom → base_link
Here is an introductory explanation for the relevant frames:
'odom':
World-fixed frame, where the pose of the platform can drift over time, without any bounds. The pose of the robot in the 'odom' frame is guaranteed to be continuous, meaning that the pose of a mobile platform in the 'odom' frame always evolves in a smooth way, without discrete jumps. The 'odom' frame is useful as an accurate, short-term local reference, but drift makes it a poor frame for long-term reference. It is usually computed from an odometry source (e.g., IMU, VO, WS).
'map':
World-fixed frame, where the pose of the robot should not significantly drift over time. This frame is not continuous, meaning the pose can present discrete jumps at any time. The 'map' frame is useful as a long-term global reference, but discrete jumps in position estimators make it a poor reference frame for local sensing and acting. A localization component constantly re-computes the robot pose in the 'map' frame based on sensor observations, therefore eliminating drift, but causing discrete jumps. When defining coordinate frames with respect to a global reference like the earth, the default procedure should be to align the origin to the ENU coordinate system. If there is no other reference, the default position of the z-axis should be zero at the height of the WGS84 ellipsoid.
Based on this understanding, it can be seen that the 'odom' frame corresponds to the FP_A-ODOMSH message in ENU coordinates (equivalently, '/fixposition/odometry_smooth_enu') and the 'map' frame corresponds to the FP_A-ODOMENU message (equivalently, '/fixposition/odometry_enu').
Configuration
To use the Vision-RTK2 with Nav2, the following changes must be applied:
Enable the following messages in the I/O configuration page of the sensor (see I/O messages):
FP_A-ODOMETRY
FP_A-ODOMENU
FP_A-ODOMSH
FP_A-LLH
FP_A-EOE_FUSION
FP_A-TF_VRTKCAM
FP_A-TF_POIVRTK
FP_A-TF_ECEFENU0
FP_A-TF_POIPOISH
FP_A-TF_POIIMUH
In the configuration file of the ROS driver (fixposition_driver_ros2/launch/config.yaml), set 'nav2_mode' to 'true' and 'qos_type' to 'default_long' (a sketch of these entries is shown after this list).
Run the 'setup_ros_ws.sh' bash script to set up the fixposition driver accordingly.
(Optional) Configure wheelspeed measurements by setting up the corresponding converter section:
converter:
  enabled: true
  topic_type: "Odometry"   # Supported types: nav_msgs/{Twist, TwistWithCov, Odometry}
  input_topic: "/odom"     # Input topic name
  scale_factor: 1000.0     # To convert from the original unit of measurement to mm/s (note: this must be a float!)
  use_x: true              # Transmit the x axis of the input velocity
  use_y: false             # Transmit the y axis of the input velocity
  use_z: false             # Transmit the z axis of the input velocity
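For reference, the entries mentioned in the step above could look roughly as follows in config.yaml (a minimal sketch; the exact placement and nesting of these keys depend on the driver version, so adapt it to the example file shipped with the driver):

nav2_mode: true            # let the driver publish the Nav2-compatible TF tree
qos_type: "default_long"   # QoS profile required for Nav2 usage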
Expected TF tree
Once the Fixposition ROS driver with Nav2 support is running, the following TF tree will be generated:
[TF tree diagram]
Usage
With the Fixposition ROS driver correctly configured and the TF tree updated for Nav2 usage, the user simply needs to connect the 'odom' and 'map' frames to the desired Nav2 components, such as waypoint navigators, obstacle avoidance, and path planning algorithms. The localization outputs in both frames are handled directly by the ROS driver, and the user only needs to focus on the navigation and control aspects of the system.
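For reference, a standard Nav2 parameter file would typically reference these frames as sketched below (the parameter names follow the Nav2 defaults; the odometry topic remapping is an assumption and depends on your bringup):

bt_navigator:
  ros__parameters:
    global_frame: map                               # long-term reference (FP_A-ODOMENU)
    robot_base_frame: base_link
    odom_topic: /fixposition/odometry_smooth_enu    # smooth odometry from the driver (assumed topic)

local_costmap:
  local_costmap:
    ros__parameters:
      global_frame: odom                            # continuous short-term frame (FP_A-ODOMSH)
      robot_base_frame: base_link

global_costmap:
  global_costmap:
    ros__parameters:
      global_frame: map
      robot_base_frame: base_link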
Example implementation
As an example implementation, we connected the Vision-RTK2 to a Scout robot using the Fixposition ROS driver. You can find more details in the repository: https://github.com/fixposition/nav2_tutorial.
Notes
Since the Vision-RTK2 is already doing the heavy lifting by fusing GNSS, IMU, and camera data into a reliable, smooth estimate, we recommend bypassing the traditional navsat_transform_node. Instead, you can write a custom node (or modify an existing one) that converts the sensor's ECEF coordinates to your desired local coordinate frame (e.g., ENU) that is consistent with your robot's world frame; the underlying conversion is sketched after these notes.
It’s necessary to change the QoS settings to 'default_long'.
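For reference, the conversion mentioned in the first note above is the standard ECEF-to-ENU transformation. A sketch, where \varphi_0 and \lambda_0 denote the latitude and longitude of the chosen datum and p_0 its ECEF position:

p_{ENU} = R \, (p_{ECEF} - p_0), \qquad
R = \begin{bmatrix}
  -\sin\lambda_0 & \cos\lambda_0 & 0 \\
  -\sin\varphi_0 \cos\lambda_0 & -\sin\varphi_0 \sin\lambda_0 & \cos\varphi_0 \\
  \cos\varphi_0 \cos\lambda_0 & \cos\varphi_0 \sin\lambda_0 & \sin\varphi_0
\end{bmatrix}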
Method 2: Vision-RTK2 as the main localization sensor
This method is still a work in progress. This section will be updated over time.
Implementation details
The 'odom' → 'base_link' transformation is usually computed via sensor fusion of odometry sources (e.g., IMU, wheel encoders, VIO) using the robot_localization package; a sketch of such a configuration is given at the end of this section.
The 'map' → 'odom' transformation is usually provided by a different ROS package dealing with localization and mapping, such as AMCL. For GNSS-based localization, we suggest following this guide: Navigating Using GPS Localization — Nav2 documentation.
It is not necessary to use the navsat_transform_node, as the Vision-RTK2 already provides an ENU output and the user can select the datum for the ECEF->ENU transformation in the Web Interface or the ‘/datum’ topic published by the Fixposition ROS driver (see navsat_transform_node documentation for more information).
We suggest using a rolling setup for the global costmap, as outdoor environments can get quite large, to the point where it may not be practical to represent them on a single costmap. In addition, the sensor reports a fixed datum via the FP_A-TF_ECEFENU0 message, so the GPS coordinates have a consistent Cartesian representation:
global_costmap:
  global_costmap:
    ros__parameters:
      ...
      rolling_window: True
      width: 50
      height: 50
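As a rough starting point for the 'odom' → 'base_link' fusion mentioned above, a robot_localization EKF node could be configured as sketched below (parameter names follow the robot_localization defaults; the topic names and the selected state variables are assumptions and must be adapted to the sensors actually available on the platform):

ekf_filter_node:
  ros__parameters:
    frequency: 50.0
    two_d_mode: true
    publish_tf: true
    map_frame: map
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom                         # publish the odom -> base_link transform
    odom0: /fixposition/odometry_smooth_enu   # assumed topic: smooth odometry from the driver
    odom0_config: [false, false, false,       # x, y, z
                   false, false, false,       # roll, pitch, yaw
                   true,  true,  false,       # vx, vy, vz
                   false, false, true,        # vroll, vpitch, vyaw
                   false, false, false]       # ax, ay, az
    imu0: /fixposition/corrimu                # assumed topic: bias-corrected IMU data
    imu0_config: [false, false, false,
                  false, false, false,
                  false, false, false,
                  true,  true,  true,         # angular velocities
                  true,  true,  true]         # linear accelerations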