Extension and update of M2DGR: a novel Multi-modal and Multi-scenario SLAM Dataset for Ground Robots (ICRA2022 & ICRA2024)
First Author: Jie Yin
Figure 1. Acquisition Platform and Diverse Scenarios.
This paper has been accepted by ICRA 2024! We are therefore releasing the full dataset, together with its calibration results and ground-truth (GT) trajectories, right now. Feel free to use the dataset to facilitate your research on SLAM, and please give the repository a star if you find it helpful.
This work is licensed under the MIT License and is provided for academic purposes. If you are interested in our project for commercial purposes, please contact us at [email protected] for further communication.
If you use this work in academic research, please cite:
@ARTICLE{9664374,
author={Yin, Jie and Li, Ang and Li, Tao and Yu, Wenxian and Zou, Danping},
journal={IEEE Robotics and Automation Letters},
title={M2DGR: A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots},
year={2021},
volume={},
number={},
pages={1-1},
doi={10.1109/LRA.2021.3138527}}
The calibration results are here. All the sensors and tracking devices, together with their key parameters, are listed below (a short usage sketch follows the list):
- LiDAR: Robosense 16, 360° horizontal FOV, -30° to +10° vertical FOV, 10 Hz, max range 200 m, range resolution 3 cm, horizontal angular resolution 0.2°;
- GNSS: Ublox F9P, GPS/BeiDou/GLONASS/Galileo, 1 Hz;
- V-I Sensor: RealSense D435i, RGB/Depth 640×480, 69° horizontal FOV, 42.5° vertical FOV, 15 Hz; IMU: 6-axis, 200 Hz;
- IMU: Wheeltec, 9-axis, 50 Hz;
- GNSS-IMU: Xsens MTi-680G, GNSS-RTK localization precision 2 cm, 100 Hz; IMU: 9-axis, 100 Hz;
- Motion-capture system: Vicon Vero 2.2, localization accuracy 1 mm, 50 Hz.
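As a small illustration of how the released camera-IMU calibration can be used, the sketch below composes a camera-to-IMU (body) extrinsic with a camera-frame point to express it in the body frame. The numerical values here are placeholders, not our calibration results; substitute the extrinsics from the calibration files.

```python
import numpy as np

# Placeholder camera-to-body extrinsic: replace with the released calibration values.
R_bc = np.eye(3)                     # rotation of the camera frame w.r.t. the body (IMU) frame
t_bc = np.array([0.05, 0.0, 0.02])   # camera origin expressed in the body frame, meters

T_bc = np.eye(4)                     # homogeneous transform T_body_camera
T_bc[:3, :3] = R_bc
T_bc[:3, 3] = t_bc

# A point observed in the camera frame (e.g. back-projected from the D435i depth stream).
p_cam = np.array([0.3, -0.1, 1.2, 1.0])   # homogeneous coordinates

# Express the same point in the body (IMU) frame.
p_body = T_bc @ p_cam
print(p_body[:3])
```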
The rostopics of our rosbag sequences are listed as follows:
- 3D LiDAR: /rslidar_points
- 2D LiDAR: /scan
- Odom: /odom
- GNSS Ublox F9P: /ublox_driver/ephem, /ublox_driver/glo_ephem, /ublox_driver/range_meas, /ublox_driver/receiver_lla, /ublox_driver/receiver_pvt
- V-I Sensor: /camera/color/image_raw, /camera/imu
- IMU: /imu
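As a quick sanity check after downloading a sequence, the minimal sketch below (assuming ROS1 and the standard rosbag Python API; the bag file name is hypothetical) lists the topics in a bag and iterates over a few of them:

```python
import rosbag

# Hypothetical file name: substitute the sequence you downloaded.
bag = rosbag.Bag("parking_01.bag")

# Print the topics contained in the bag, their message types, and message counts.
info = bag.get_type_and_topic_info()
for topic, meta in info.topics.items():
    print(topic, meta.msg_type, meta.message_count)

# Iterate over LiDAR scans and color images in timestamp order.
for topic, msg, t in bag.read_messages(topics=["/rslidar_points", "/camera/color/image_raw"]):
    print(t.to_sec(), topic)

bag.close()
```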
Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag |
---|---|---|---|---|---|
Anomaly | 2023-8 | 1.5 GB | 57 s | Wheel anomaly | Rosbag |
Switch | 2023-8 | 9.5 GB | 292 s | Indoor-outdoor switch | Rosbag |
Tree | 2023-8 | 3.7 GB | 160 s | Dense tree-leaf cover | Rosbag |
Bridge_01 | 2022-11 | 2.4 GB | 75 s | Bridge, zigzag | Rosbag |
Bridge_02 | 2022-11 | 16.0 GB | 501 s | Bridge, long-term, straight line | Rosbag |
Street_01 | 2022-11 | 1.7 GB | 58 s | Street, straight line | Rosbag |
Street_02 | 2022-11 | 3.9 GB | 126 s | Bridge, sharp turn | Rosbag |
Parking_01 | 2022-11 | 3.3 GB | 105 s | Parking lot, side moving | Rosbag |
Parking_02 | 2022-11 | 5.4 GB | 149 s | Parking lot, rectangle loop | Rosbag |
Building_01 | 2022-11 | 3.7 GB | 120 s | Building, far features | Rosbag |
Building_02 | 2022-11 | 3.4 GB | 110 s | Building, far features | Rosbag |
We test methods with diverse sensor settings to validate our benchmark dataset. The results show that our dataset is a valid and effective testbed for localization methods, and in some cases our Ground-Fusion achieves performance comparable to LiDAR SLAM!
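For reference, ATE RMSE against the released GT trajectories can be computed with the evo package. Below is a minimal sketch, assuming both the GT and the estimated trajectory are stored in TUM format; the file names are hypothetical:

```python
from evo.core import metrics, sync
from evo.tools import file_interface

# Hypothetical file names; trajectories are assumed to be in TUM format (t x y z qx qy qz qw).
traj_ref = file_interface.read_tum_trajectory_file("parking_01_gt.txt")
traj_est = file_interface.read_tum_trajectory_file("parking_01_est.txt")

# Associate poses by timestamp and align the estimate to the ground truth (SE(3) Umeyama).
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
traj_est.align(traj_ref, correct_scale=False)

# Absolute trajectory error on the translation part, reported as RMSE in meters.
ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_ref, traj_est))
print("ATE RMSE [m]:", ape.get_statistic(metrics.StatisticsType.rmse))
```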
Figure 2. ATE RMSE (m) results on selected sequences.
Figure 3. The visualized trajectory.
We provide configuration files for several cutting-edge baseline methods, including VINS-RGBD, TartanVO, VINS-Mono, VIW-Fusion, and GVINS.