This repository provides the official code for the CVPR 2024 paper "Unleashing Unlabeled Data: A Paradigm for Cross-View Geo-Localization".
If you find our work useful, please star this repository and cite our paper:
@inproceedings{li2024unleashing,
  title={Unleashing Unlabeled Data: A Paradigm for Cross-View Geo-Localization},
  author={Li, Guopeng and Qian, Ming and Xia, Gui-Song},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={16719--16729},
  year={2024}
}
Please prepare the VIGOR, CVUSA, or CVACT dataset. Put it in the "data/" folder, or redirect the path with a soft link (ln -s).
UCVGL
├── ckpt/
│   ├── VIGOR/
│   ├── CVUSA/
│   └── CVACT/
└── data/
    ├── VIGOR/
    ├── CVUSA/
    │   ├── g2a/
    │   ├── g2a_sat/
    │   ├── streetview/panos
    │   └── bingmap/19
    └── CVACT/
        ├── g2a/
        ├── g2a_sat/
        ├── streetview/
        └── satview_polish/
where "g2a/" is produced by CFP/GeometricProjection and "g2a_sat/" is generated by CFP/CycleGAN. You can also find them in this link.
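If a dataset already lives elsewhere on disk, a soft link avoids copying it into "data/". A minimal sketch, assuming VIGOR as the dataset and using /path/to/VIGOR as a placeholder for your actual download location:

```shell
# Create the data/ folder and link an existing dataset copy into it.
# /path/to/VIGOR is a placeholder; replace it with your real download path.
mkdir -p data
ln -s /path/to/VIGOR data/VIGOR

# Confirm the link points where you expect.
readlink data/VIGOR
```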
1. conda create --name UCVGL python=3.8
2. conda activate UCVGL
3. conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia
4. pip install -r requirements.txt
For Step 3, install the PyTorch build that matches your CUDA version from this page.
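After Step 4, a quick sanity check (a sketch, not part of the official scripts) confirms that the installed PyTorch matches your setup and can see the GPU:

```python
# Verify the PyTorch install and CUDA visibility inside the UCVGL env.
import torch

print(torch.__version__)          # should report the version installed in Step 3
print(torch.cuda.is_available())  # True if the CUDA build matches your driver
```

If `torch.cuda.is_available()` prints False, re-check that the pytorch-cuda version in Step 3 matches the CUDA toolkit supported by your driver.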
bash run_CVUSA.sh
bash run_CVACT.sh
bash run_VIGOR.sh
- http://mvrl.cs.uky.edu/datasets/cvusa/
- https://github.com/Jeff-Zilence/VIGOR
- https://github.com/Liumouliu/OriCNN
- https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
- https://github.com/Jeff-Zilence/TransGeo2022
- https://github.com/Skyy93/Sample4Geo