# dloc - the deep learning image matching toolbox

This repository provides accessible interfaces to several state-of-the-art (SOTA) methods for matching feature correspondences between image pairs. We provide scripts to evaluate their predicted correspondences on common benchmarks for image matching, homography estimation, and visual localization.

## Supported Methods

The code covers three stages: co-visible area estimation, feature point extraction, and feature point matching. Both detector-based and detector-free methods are supported, including:

- `d2net`: extracts keypoints from a 1/8-resolution feature map
- `superpoint`: corner-oriented keypoint extraction, pretrained with MagicPoint
- `superglue`: strong matching algorithm; the official pretrained model only supports SuperPoint, so we provide SuperGlue implementations trained with SIFT/SuperPoint on the MegaDepth dataset
- `disk`: keypoint extraction trained with reinforcement learning
- `aslfeat`: multi-scale keypoint extraction network
- `cotr`: transformer network for point matching
- `loftr`: dense extraction and matching in a single end-to-end network
- `r2d2`: keypoint extraction with repeatability and reliability scores
- `contextdesc`: SIFT keypoints with descriptors enhanced by full-image context; computationally expensive
- `OETR`: co-visible area estimation for image pairs
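To illustrate the matching stage that the detector-based methods above share, here is a minimal mutual-nearest-neighbour descriptor matcher. This is a generic sketch, not the matching code shipped in this repository; it assumes L2-normalised descriptors so that the dot product acts as cosine similarity.

```python
import numpy as np

def mutual_nn_match(desc0, desc1):
    """Mutual nearest-neighbour matching between two descriptor sets.

    desc0: (N, D) array, desc1: (M, D) array, rows L2-normalised.
    Returns an (K, 2) array of index pairs (i, j) that are each
    other's nearest neighbour.
    """
    sim = desc0 @ desc1.T          # similarity matrix, shape (N, M)
    nn01 = sim.argmax(axis=1)      # best match in desc1 for each row of desc0
    nn10 = sim.argmax(axis=0)      # best match in desc0 for each row of desc1
    ids = np.arange(len(desc0))
    mutual = nn10[nn01] == ids     # keep pairs that agree in both directions
    return np.stack([ids[mutual], nn01[mutual]], axis=1)
```

Detector-free methods such as LoFTR and COTR skip this step entirely and regress correspondences directly from the image pair.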

## Installation

This repository supports several SOTA methods. To use the code, follow these steps:

1. Download this repository and initialize the submodules into the `third_party` folder:

   ```
   git clone

   # Install submodules non-recursively
   cd OETR/
   git submodule update --init
   ```

2. Install the requirements for each submodule.
3. Download the model weights from https://drive.google.com/drive/folders/1UedCycHJph4PDoStAAyxtdRxUX9PwLsJ?usp=sharing and place them in the `weights` folder.
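After completing the steps above, a quick sanity check can confirm the expected layout before running anything. The `third_party` and `weights` folder names come from the instructions above; the helper itself is hypothetical and not part of the toolbox.

```python
from pathlib import Path

def check_layout(repo_root="."):
    """Return the list of expected folders missing from the repo root.

    An empty list means both third_party/ (submodules) and
    weights/ (downloaded models) are in place.
    """
    root = Path(repo_root)
    return [d for d in ("third_party", "weights") if not (root / d).is_dir()]
```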

## Inference and evaluation

Download the image pairs and relative pose ground truth for IMC and MegaDepth into the `assets/` folder. You can also choose which datasets and methods to run; see `evaluate_imc.sh` and `evaluate_megadepth.sh`:

1. Benchmark on the IMC dataset: `sh evaluate_imc.sh`

2. Benchmark on the MegaDepth dataset: `sh evaluate_megadepth.sh`

You can choose to run only a subset of the methods inside each script. Once the results are produced, run the evaluation pipeline for IMC or MegaDepth:

```
python3 dloc/evaluate/eval_imc.py --input_pairs ./assets/imc/imc_0.1.txt --results_path outputs/imc_2011/ --methods_file assets/methods.txt
```

or

```
python3 dloc/evaluate/eval_megadepth.py --input_pairs ./assets/megadepth/megadepth_scale_34.txt --results_path outputs/megadepth_34/ --methods_file assets/methods.txt
```
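The two evaluation commands differ only in their script and path arguments. As a convenience, they can be assembled programmatically; the paths below are copied from the examples above, while the wrapper function itself is hypothetical and not part of the toolbox.

```python
# Dataset name -> (evaluation script, input pairs file, results path),
# taken verbatim from the README's example commands.
EVALS = {
    "imc": ("dloc/evaluate/eval_imc.py",
            "./assets/imc/imc_0.1.txt",
            "outputs/imc_2011/"),
    "megadepth": ("dloc/evaluate/eval_megadepth.py",
                  "./assets/megadepth/megadepth_scale_34.txt",
                  "outputs/megadepth_34/"),
}

def build_eval_cmd(dataset, methods_file="assets/methods.txt"):
    """Build the argv list for one evaluation run (pass to subprocess.run)."""
    script, pairs, results = EVALS[dataset]
    return ["python3", script,
            "--input_pairs", pairs,
            "--results_path", results,
            "--methods_file", methods_file]
```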