[ECCV 2024] The official codebase for the paper "V2X-Real: a Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception"
This is the official implementation of the ECCV 2024 paper "V2X-Real: a Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception" by Hao Xiang, Zhaoliang Zheng, Xin Xia, Runsheng Xu, Letian Gao, Zewei Zhou, Xu Han, Xinkai Ji, Mingxi Li, Zonglin Meng, Jin Li, Mingyue Lei, Zhaoyang Ma, Zihang He, Haoxuan Ma, Yunshuang Yuan, Yingqian Zhao, Jiaqi Ma.
Supported by the UCLA Mobility Lab.
- Support both simulation and real-world cooperative perception datasets:
  - V2X-Real
  - OPV2V
- Support multi-class, multi-agent 3D object detection.
- SOTA models supported.
Please check the website to download the data. The data is in OPV2V format.
After downloading the data, please organize it in the following structure:
├── v2xreal
│   ├── train
│   │   ├── 2023-03-17-15-53-02_1_0
│   ├── validate
│   ├── test
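As a quick sanity check (a minimal sketch, not part of the repository; the root path and split names are taken from the structure above), the snippet below counts the scenario folders in each split:

from pathlib import Path

# Assumed dataset root; adjust to where you placed the data
data_root = Path("v2xreal")

# Each split contains scenario folders such as 2023-03-17-15-53-02_1_0
for split in ["train", "validate", "test"]:
    split_dir = data_root / split
    scenarios = [p for p in split_dir.iterdir() if p.is_dir()]
    print(f"{split}: {len(scenarios)} scenario folders")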
V2X-Real is built upon OpenCOOD. Unlike OpenCOOD, which only supports single-class (i.e., vehicle) detection, this framework supports multi-class object detection. V2X-Real groups object types with similar sizes into the same meta-class for learning.
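For illustration only, such a grouping could look like the sketch below; the class names and meta-class assignments here are assumptions, not the dataset's actual taxonomy (please refer to the yaml configs for the real mapping):

# Hypothetical meta-class grouping: object types of similar size share a meta-class.
# The actual mapping used by V2X-Real is defined in its yaml configs.
meta_classes = {
    "vehicle": ["car", "van", "suv"],
    "truck": ["truck", "bus"],
    "pedestrian": ["pedestrian", "cyclist"],
}

# Invert the mapping so raw labels can be looked up during training.
label_to_meta = {
    obj_type: meta
    for meta, obj_types in meta_classes.items()
    for obj_type in obj_types
}
print(label_to_meta["van"])  # -> "vehicle"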
Please refer to the following steps for the environment setup:
# Create conda environment (python >= 3.7)
conda create -n v2xreal python=3.7
conda activate v2xreal
# pytorch installation
conda install pytorch==1.12.0 torchvision==0.13.0 cudatoolkit=11.3 -c pytorch -c conda-forge
# spconv 2.x Installation
pip install spconv-cu113
# Install other dependencies
pip install -r requirements.txt
python setup.py develop
# Install the CUDA version of the bbox NMS calculation
python opencood/utils/setup.py build_ext --inplace
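After installation, a quick check like the one below (a minimal sketch, not part of the repository) can confirm that PyTorch, CUDA, and spconv are importable and roughly match the versions installed above:

import torch
import spconv.pytorch  # spconv 2.x entry point

print("torch:", torch.__version__)            # expect 1.12.x
print("cuda available:", torch.cuda.is_available())
print("spconv 2.x imported successfully")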
For training, please run:
python opencood/tools/train_da.py --hypes_yaml hypes_yaml/xxxx.yaml --half
Argument explanations:
- `hypes_yaml`: the path to the yaml configuration of the cooperative perception models.
For inference, please run the following command:
python opencood/tools/inference.py --model_dir ${CHECKPOINT_FOLDER} --fusion_method ${FUSION_STRATEGY} [--show_vis] [--show_sequence]
Argument explanations:
- `model_dir`: the path to your saved model.
- `fusion_method`: indicates the fusion strategy; currently supports 'nofusion', 'early', 'late', and 'intermediate'.
- `show_vis`: whether to visualize the detection overlay with the point cloud.
- `show_sequence`: visualize the detection results in a video stream. It can NOT be set together with `show_vis`.
To switch dataset modes, please change `dataset_mode` within the yaml config file or the corresponding flag within the script (see the sketch after the list of supported options below).
Supported options:
- `vc`: V2X-Real-VC, where the ego agent is fixed as the autonomous vehicle while the collaborators include both the infrastructure and vehicles.
- `ic`: V2X-Real-IC, where the infrastructure is chosen as the ego agent and the neighboring vehicles and infrastructure can collaborate with the ego infrastructure by sharing sensing observations. The final evaluation is conducted on the ego infrastructure side.
- `v2v`: V2X-Real-V2V, where only vehicle-to-vehicle collaboration is considered.
- `i2i`: V2X-Real-I2I, where infrastructure-to-infrastructure collaboration is studied.
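As an illustrative sketch (assuming `dataset_mode` sits at the top level of the hypes yaml; the actual key location may differ, and the config path below is hypothetical), the mode could also be switched programmatically:

import yaml

# Hypothetical config path; replace with the hypes yaml you actually train with
config_path = "hypes_yaml/example.yaml"

with open(config_path) as f:
    hypes = yaml.safe_load(f)

# Valid values: 'vc', 'ic', 'v2v', 'i2i'
hypes["dataset_mode"] = "vc"

with open(config_path, "w") as f:
    yaml.safe_dump(hypes, f)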
@article{xiang2024v2x,
title={V2X-Real: a Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception},
author={Xiang, Hao and Zheng, Zhaoliang and Xia, Xin and Xu, Runsheng and Gao, Letian and Zhou, Zewei and Han, Xu and Ji, Xinkai and Li, Mingxi and Meng, Zonglin and others},
journal={arXiv preprint arXiv:2403.16034},
year={2024}
}