This package runs the full pipeline for object detection (based on `darknet_ros`), keypoint localization, and semantic mapping.
NOTE: As of now, the IMU factors are relatively untested/unsupported; it is best to use an external odometry source with the `ExternalOdometryHandler` odometry handler class (`odometry_type == external`), or VISO-based odometry, within the semantic SLAM node / launch file.
- The `object_keypoint_detector` package is used to detect keypoints on provided images.
- The `semantic_slam` package is responsible for handling the semantic mapping process.
- The `bag_extractor` package is used to save data to timestamped images and `.npz` files that contain groundtruth poses, in order to add additional data to the training process.
- The `object_pose_interface_msgs` package contains the necessary ROS interface messages.
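As a quick sanity check on extracted data, the `.npz` files can be inspected with NumPy. This is a minimal sketch; the array keys (`timestamps`, `poses`) are placeholder names for illustration, not names guaranteed by `bag_extractor`:

```python
import numpy as np

def load_groundtruth(npz_path):
    """Load every array stored in a bag_extractor-style .npz file.

    The key names depend on how the extractor was configured; this helper
    simply returns whatever arrays are present.
    """
    data = np.load(npz_path)
    return {key: data[key] for key in data.files}

# Create a stand-in file so the sketch is self-contained:
# two timestamps and two 4x4 identity poses.
np.savez("example_gt.npz",
         timestamps=np.array([0.0, 0.1]),
         poses=np.eye(4)[None].repeat(2, axis=0))

gt = load_groundtruth("example_gt.npz")
print(sorted(gt.keys()))   # ['poses', 'timestamps']
print(gt["poses"].shape)   # (2, 4, 4)
```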
Scripts to install all required dependencies are available at https://github.com/seanbow/xavier-setup-scripts (contrary to the name, these scripts work correctly on both desktop and NVIDIA Xavier platforms).
Alternatively:

- Install ROS (tested so far only on ROS Melodic and Noetic).
- Install the required system packages:

  ```
  sudo apt install libgoogle-glog-dev libpng++-dev ros-melodic-rosfmt
  ```

- Build and install Google's `ceres-solver` from source: https://github.com/ceres-solver/ceres-solver. Be sure to set `-DCMAKE_C_FLAGS="-march=native" -DCMAKE_CXX_FLAGS="-march=native"` when calling CMake, or else you may run into memory-alignment-related issues and crashes.
- Build and install GTSAM: https://github.com/borglab/gtsam. Make sure `GTSAM_USE_SYSTEM_EIGEN` and `GTSAM_TYPEDEF_POINTS_TO_VECTORS` are set to true.
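The two from-source builds above can be sketched as follows. This is a non-authoritative recipe: the repository URLs and CMake options come from the steps above, but install prefixes, branches, and generator settings should be adjusted to your system:

```shell
# Ceres: pass -march=native to avoid memory-alignment crashes
git clone https://github.com/ceres-solver/ceres-solver
mkdir ceres-solver/build && cd ceres-solver/build
cmake .. -DCMAKE_C_FLAGS="-march=native" -DCMAKE_CXX_FLAGS="-march=native"
make -j"$(nproc)" && sudo make install
cd ../..

# GTSAM: use the system Eigen and vector-typedef'd points, as required above
git clone https://github.com/borglab/gtsam
mkdir gtsam/build && cd gtsam/build
cmake .. -DGTSAM_USE_SYSTEM_EIGEN=ON -DGTSAM_TYPEDEF_POINTS_TO_VECTORS=ON
make -j"$(nproc)" && sudo make install
```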
Train a keypoint model with the PyTorch keypoint training code found here.
- Modify this launch file to:
  - specify the path to your `num_keypoints_file`,
  - specify the path to your model (parameter `model_path`),
  - define your model type as `StackedHourglass` or `CPN50` (parameter `model_type`).
- Copy the files for the classes you used in your `num_keypoints_file` (and in your keypoint detection model) from the `objects_all` directory to the `objects` directory.
- Except for the semantic SLAM launch file, other parameters are exposed in the files included in this folder for object detection, and in this folder for semantic SLAM. Note: in order to change the tracking rate, modify both `tracking_framerate` here and `hz` here.
- Run the semantic SLAM launch file.
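The keypoint-detector settings above end up as launch-file parameters. A hypothetical fragment is shown below; the node, package, and file names are illustrative placeholders, so use the names that appear in the actual launch file:

```xml
<launch>
  <!-- All names and paths below are illustrative placeholders -->
  <node name="object_keypoint_detector" pkg="object_keypoint_detector"
        type="keypoint_detector_node" output="screen">
    <param name="num_keypoints_file"
           value="$(find object_keypoint_detector)/config/num_keypoints.yaml"/>
    <param name="model_path" value="/path/to/your/keypoint_model"/>
    <param name="model_type" value="StackedHourglass"/>  <!-- or CPN50 -->
  </node>
</launch>
```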
- Copy your models into the `models` directory. You will need a `.pt` and a `.pkl` model.
- Modify this launch file to point to the right model and camera topic.
- Run the launch file.
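In practice, the launch-file edits in the last steps amount to pointing parameters at the copied `.pt`/`.pkl` models and remapping the image topic. A hedged sketch, in which every node, parameter, and topic name is a placeholder rather than the package's actual interface:

```xml
<launch>
  <!-- Illustrative only: node, parameter, and topic names are placeholders -->
  <node name="detector" pkg="your_detector_pkg" type="detector_node" output="screen">
    <param name="model_path"  value="$(find your_detector_pkg)/models/model.pt"/>
    <param name="params_path" value="$(find your_detector_pkg)/models/model.pkl"/>
    <remap from="image" to="/your_camera/image_raw"/>
  </node>
</launch>
```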
If you use this code in an academic publication, please cite the following work:
Sean L. Bowman, Nikolay Atanasov, Kostas Daniilidis, and George J. Pappas. "Probabilistic data association for semantic SLAM", in IEEE International Conference on Robotics and Automation (ICRA), 2017. doi: 10.1109/ICRA.2017.7989203.
This technology is the subject of US Patent 11,187,536.