
Real-time 3D mapping using a 2D laser scanner and IMU-aided visual SLAM

Real-time 3D mapping using a 2D laser scanner with pose estimates from an IMU-aided visual SLAM system.

Accurate motion estimation of the robot is achieved by visual-inertial fusion based on an extended Kalman filter (EKF). A 2D laser scanner mounted on the robot continuously provides range measurements scanned in the vertical plane, which are converted into point clouds in Cartesian space. Using the robot's pose estimates, these point clouds are transformed into the world frame in real time. Furthermore, the point clouds received between two consecutive keyframes of the visual SLAM system are accumulated into a unit attached to a keyframe, so that they can be corrected later by loop closing in visual SLAM. A globally consistent 3D map is built simultaneously by assembling these units of point clouds.
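For illustration, here is a minimal C++/Eigen sketch of the per-scan step described above: converting one vertical-plane 2D scan from polar to Cartesian coordinates and transforming it into the world frame with the current pose estimate. This is not the repository's code; the Scan struct, the function name scanToWorld, and the choice of the scanner's local x-z plane for the vertical scan are assumptions.

    #include <Eigen/Dense>
    #include <cmath>
    #include <vector>

    // Hypothetical container for one 2D laser scan (illustrative only).
    struct Scan {
        float angle_min;            // angle of the first beam [rad]
        float angle_increment;      // angular step between beams [rad]
        std::vector<float> ranges;  // measured ranges [m]
    };

    // Convert a scan to world-frame 3D points, given the scanner pose in the
    // world frame (rotation q, translation t) from the EKF / visual SLAM.
    std::vector<Eigen::Vector3f> scanToWorld(const Scan& scan,
                                             const Eigen::Quaternionf& q,
                                             const Eigen::Vector3f& t) {
        std::vector<Eigen::Vector3f> points;
        points.reserve(scan.ranges.size());
        for (std::size_t i = 0; i < scan.ranges.size(); ++i) {
            const float r = scan.ranges[i];
            if (!std::isfinite(r)) continue;  // drop invalid returns
            const float a = scan.angle_min + i * scan.angle_increment;
            // Polar -> Cartesian; the vertical scan plane is assumed to be
            // the scanner's local x-z plane.
            const Eigen::Vector3f p(r * std::cos(a), 0.0f, r * std::sin(a));
            // Transform into the world frame with the current pose estimate.
            points.push_back(q * p + t);
        }
        return points;
    }

In the actual system, the clouds gathered between two keyframes are stored relative to that keyframe's pose, so a loop closure that adjusts the keyframe poses corrects the accumulated clouds as well.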

This work builds on the systems described in [1, 2].

[1] Raúl Mur-Artal and Juan D. Tardós. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. arXiv preprint arXiv:1610.06475, 2016.

[2] S. Weiss and R. Siegwart. Real-Time Metric State Estimation for Modular Vision-Inertial Systems. In IEEE International Conference on Robotics and Automation (ICRA), pages 4531–4537, 2011.

ORB_SLAM2_mapping is a real-time SLAM library for monocular, stereo and RGB-D cameras that computes the camera trajectory and a sparse 3D reconstruction (with true scale in the stereo and RGB-D cases). It is able to detect loops and relocalize the camera in real time. On top of it, real-time 3D mapping is achieved using laser scans together with the pose estimates from visual-inertial fusion.

ORB_SLAM2_mapping uses the following libraries:

  1. Pangolin is used for visualization and the user interface (download and install instructions can be found at: https://github.com/stevenlovegrove/Pangolin).

  2. OpenCV is used to manipulate images and features. Download and install instructions can be found at: http://opencv.org. At least version 2.4.3 is required. Tested with OpenCV 2.4.11 and OpenCV 3.2.

  3. The DBoW2 library is used for place recognition and the g2o library for non-linear optimization. Both modified libraries (which are BSD-licensed) are included in the Thirdparty folder.

    Eigen3 is required by g2o (download and install instructions can be found at: http://eigen.tuxfamily.org; at least version 3.1.0 is required).

ethzasl_xsens_driver is a ROS driver for Xsens MT/MTi/MTi-G devices.

ethzasl_msf is a modular framework for multi-sensor fusion based on an extended Kalman filter (EKF).
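For intuition only, here is a toy C++/Eigen sketch of the kind of EKF measurement update such a framework performs, in which an IMU-propagated state is corrected by a position measurement from visual SLAM. This is not the ethzasl_msf API; the state layout, dimensions and noise values are made up for illustration.

    #include <Eigen/Dense>

    int main() {
        const int N = 6;  // toy state: position (3) + velocity (3); the real msf state is larger
        Eigen::VectorXd x = Eigen::VectorXd::Zero(N);         // state estimate
        Eigen::MatrixXd P = Eigen::MatrixXd::Identity(N, N);  // state covariance

        // Toy measurement model: visual SLAM observes position only.
        Eigen::MatrixXd H = Eigen::MatrixXd::Zero(3, N);
        H.block<3, 3>(0, 0).setIdentity();
        const Eigen::Matrix3d R = 0.01 * Eigen::Matrix3d::Identity();  // measurement noise

        const Eigen::Vector3d z(1.0, 0.5, 0.2);  // position reported by visual SLAM

        // Standard EKF update: innovation, Kalman gain, state/covariance correction.
        const Eigen::Vector3d y = z - H * x;
        const Eigen::Matrix3d S = H * P * H.transpose() + R;
        const Eigen::MatrixXd K = P * H.transpose() * S.inverse();
        x += K * y;
        P = (Eigen::MatrixXd::Identity(N, N) - K * H) * P;
        return 0;
    }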

Make sure that all nodes base their time on simulation time (the /clock topic published when the bag is played back with --clock):

rosparam set /use_sim_time true

Running the RGB-D node of the visual SLAM system

roslaunch ORB_SLAM2 test.launch

Running the multi-sensor fusion framework

roslaunch msf_updates pose_sensor.launch

Playing back the bag

rosbag play --clock 001.bag
