
Method 2: Vision-RTK2 as the main localization sensor

This method is still a work in progress. This section will be updated over time.

Implementation details

  • The odom->base_link transformation is usually computed via sensor fusion of odometry sensors (e.g., IMU, wheel encoders, VIO) using the robot_localization package.

  • The map->odom transformation is usually provided by a different ROS package dealing with localization and mapping such as AMCL. For GNSS-based localization, we suggest following this guide: Navigating Using GPS Localization — Nav2 documentation.

  • It is not necessary to use the navsat_transform_node, as the Vision-RTK2 already provides an ENU output. The user can select the datum for the ECEF->ENU transformation in the Web Interface or via the ‘/datum’ topic published by the Fixposition ROS driver (see the navsat_transform_node documentation for more information).

  • We suggest using a rolling setup for the global costmap, as outdoor environments can grow too large to be practically represented on a single static costmap. In addition, the sensor reports a fixed datum via the FP_A-TF_ECEFENU0 message, so the GPS coordinates have a consistent Cartesian representation.

YAML
global_costmap:
  global_costmap:
    ros__parameters:
      ...
      rolling_window: True
      width: 50
      height: 50
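
For the odom->base_link transformation mentioned above, a minimal robot_localization EKF configuration fusing wheel odometry and an IMU could look like the following sketch. The topic names (/wheel/odometry, /imu/data) and the choice of fused fields are illustrative assumptions, not a tested configuration; adapt them to your robot.

YAML
ekf_filter_node_odom:
  ros__parameters:
    frequency: 30.0
    two_d_mode: true
    publish_tf: true
    map_frame: map
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom            # fusing in the odom frame -> publishes odom->base_link
    # Wheel odometry: fuse x/y velocity and yaw rate (topic name is a placeholder)
    odom0: /wheel/odometry
    odom0_config: [false, false, false,
                   false, false, false,
                   true,  true,  false,
                   false, false, true,
                   false, false, false]
    # IMU: fuse yaw, yaw rate, and x acceleration (topic name is a placeholder)
    imu0: /imu/data
    imu0_config: [false, false, false,
                  false, false, true,
                  false, false, false,
                  false, false, true,
                  true,  false, false]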
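
Following the dual-EKF pattern from the Nav2 GPS localization guide, the map->odom transformation can likewise be produced by a second robot_localization EKF that fuses the sensor's ENU output directly, with no navsat_transform_node in between. The sketch below assumes the driver publishes an ENU odometry topic; the topic name /fixposition/odometry_enu is a placeholder, so check your driver configuration for the actual name.

YAML
ekf_filter_node_map:
  ros__parameters:
    frequency: 30.0
    two_d_mode: true
    publish_tf: true             # publishes map->odom (world_frame -> odom_frame)
    map_frame: map
    odom_frame: odom
    base_link_frame: base_link
    world_frame: map             # fusing a global (ENU) pose -> map->odom TF
    # ENU pose from the Vision-RTK2: fuse x, y, and yaw (topic name is an assumption)
    odom0: /fixposition/odometry_enu
    odom0_config: [true,  true,  false,
                   false, false, true,
                   false, false, false,
                   false, false, false,
                   false, false, false]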
