nvm_visualize View Map Poses (nasa#491)
* view 3d map poses if no images available

* fixed jump to 3d view

* added only poses option

* added instructions to view map
rsoussan authored May 16, 2022
1 parent 1c694f0 commit 61b1e8f
Showing 2 changed files with 41 additions and 14 deletions.
8 changes: 8 additions & 0 deletions localization/sparse_mapping/build_map_from_multiple_bags.md
@@ -28,13 +28,21 @@ Run

This builds individual maps in parallel for each spliced bag in the directory. The mapping process first removes low-movement images from a bagfile to prevent triangulation issues and make mapping more efficient, then builds a SURF map (detects image features, matches features, runs incremental bundle adjustment on successive images, and finishes with a final round of bundle adjustment on the whole map).
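If one of the maps needs to be rebuilt by hand, the underlying `build_map` tool can be run directly on a single bag's images; a minimal sketch, assuming images are passed as positional arguments as in the general map-building documentation, is
`rosrun sparse_mapping build_map image_files -output_map map_name`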

View the map poses using
`rosrun sparse_mapping nvm_visualize -only_poses`
to ensure the maps were built successfully.
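For example, assuming the map file is passed as the final positional argument:
`rosrun sparse_mapping nvm_visualize -only_poses map_name`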


### 4. Merge SURF maps
A common merge strategy for a set of movements is to identify the longest movement that covers most of the area and incrementally merge in maps that have sufficient overlap with it. Use:
`rosrun sparse_mapping merge_maps larger_map_name smaller_map_name -output_map combined_map_name -num_image_overlaps_at_endpoints 1000000`

First merge the new movement maps together using the above strategy, then optionally merge the resulting combined map into an already existing map that has some overlap with it, as in the example below.
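For example, with one long movement map and two shorter ones (the map names here are placeholders):
`rosrun sparse_mapping merge_maps long_movement.map short_movement_1.map -output_map combined_1.map -num_image_overlaps_at_endpoints 1000000`
`rosrun sparse_mapping merge_maps combined_1.map short_movement_2.map -output_map combined_2.map -num_image_overlaps_at_endpoints 1000000`
Merging into an existing map follows the same pattern, with the larger map listed first.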

View the resulting map poses using
`rosrun sparse_mapping nvm_visualize -only_poses`
to ensure the maps were merged successfully.

### 5. Register SURF map using provided world points
Since mapping is performed using monocular images, no scale is set and the map needs to be registered using real-world points before being used for localization. Additionally, providing known point locations for images in the map can improve the accuracy of the map creation.

47 changes: 33 additions & 14 deletions localization/sparse_mapping/tools/nvm_visualize.cc
@@ -39,6 +39,8 @@
#include <fstream>
#include <sstream>

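// Convenience flag: when set, main() forces jump_to_3d_view, skip_3d_points
// and skip_3d_images so that only the camera poses are drawn.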
DEFINE_bool(only_poses, false,
"Only view camera poses in 3D.");
DEFINE_bool(jump_to_3d_view, false,
"Skip displaying images and features, go directly to the 3D view.");
DEFINE_bool(skip_3d_points, false,
@@ -81,6 +83,8 @@ cv::Affine3f EigenToCVAffine(const Eigen::Affine3d & e) {

void Draw3DFrame(cv::viz::Viz3d* window, const sparse_mapping::SparseMap & map,
int cid, cv::Vec3f const& center) {
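// Nothing to draw when only the 3D point cloud is requested, since this
// function only adds the camera frustum widget for frame cid.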
if (FLAGS_only_3d_points) return;

std::ostringstream str;

// Convert to an affine, perhaps change the origin
@@ -89,21 +93,30 @@ void Draw3DFrame(cv::viz::Viz3d* window, const sparse_mapping::SparseMap & map,

// Load up the image
cv::Mat image = cv::imread(map.GetFrameFilename(cid), cv::IMREAD_GRAYSCALE);
double f = map.GetCameraParameters().GetFocalLength();

if (!FLAGS_only_3d_points) {
if (FLAGS_skip_3d_images) {
window->showWidget(std::string("frame") + std::to_string(cid),
cv::viz::WCameraPosition(cv::Vec2f(2*atan(image.cols * 0.5 / f),
2*atan(image.rows * 0.5 / f)),
FLAGS_scale), camera_pose);
} else {
window->showWidget(std::string("frame") + std::to_string(cid),
cv::viz::WCameraPosition(cv::Vec2f(2*atan(image.cols * 0.5 / f),
2*atan(image.rows * 0.5 / f)),
image, FLAGS_scale), camera_pose);
}
if (image.empty() && !FLAGS_skip_3d_images) {
LOG(FATAL) << "Failed to load image.";
}

double f;
int cols;
int rows;
if (!image.empty()) {
f = map.GetCameraParameters().GetFocalLength();
cols = image.cols;
rows = image.rows;
} else {
// Set default values if no images are available so that poses are still viewable
f = 600;
cols = 640;
rows = 480;
}
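// WCameraPosition's Vec2f argument is the frustum's (horizontal, vertical)
// field of view: fov = 2 * atan(extent / (2 * f)), with extent and f in
// pixels; the fallback values give 2 * atan(640 / 1200) ~= 0.98 rad
// (about 56 degrees) horizontally.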
const auto viz_camera_position =
FLAGS_skip_3d_images
? cv::viz::WCameraPosition(cv::Vec2f(2 * atan(cols * 0.5 / f), 2 * atan(rows * 0.5 / f)), FLAGS_scale)
: cv::viz::WCameraPosition(cv::Vec2f(2 * atan(cols * 0.5 / f), 2 * atan(rows * 0.5 / f)), image, FLAGS_scale);

window->showWidget(std::string("frame") + std::to_string(cid),
viz_camera_position, camera_pose);
}

typedef struct {
@@ -484,6 +497,12 @@ int main(int argc, char** argv) {
return 0;
}

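// -only_poses is shorthand for the three flags below: jump straight to the
// 3D view, skip the point cloud, and draw camera frustums without image
// textures.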
if (FLAGS_only_poses) {
FLAGS_jump_to_3d_view = true;
FLAGS_skip_3d_points = true;
FLAGS_skip_3d_images = true;
}

// Do a visualization of all the projected keypoints into the
// cameras. Navigate with the arrow keys or with 'a' and 'd'. Press
// 'c' to show the cameras and points in 3D.
