As of Jan 22, 2024, the ApolloSim dataset is available on robodata at `/robodata/public_datasets/Datasets/Apollo_Sim_3D_Lane_Release`.
In Step 1, run the following command to install the dependencies:
```
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch -c conda-forge
```
In Step 2, install `mmcv-full` (<= 1.6.2) and jarvis using the following commands:
```
mim install mmcv-full==1.6.2
pip install ./wheels/jarvis-2021.4.2-py2.py3-none-any.whl
```
In ApolloSim step 2, run the following command to generate the annotation pickle files:
```
python apollosim.py /robodata/public_datasets/Datasets/Apollo_Sim_3D_Lane_Release
```
After installing the dependencies and building the dataset, we updated `data_root` in `configs/apollosim/anchor3dlane_iter.py` and set `args.show = True` to produce visualizations, then ran the following to test:
```
python tools/test.py configs/apollosim/anchor3dlane_iter.py /robodata/arthurz/EcoCAR/Anchor3DLane/anchor3dlane_weights/apollo_anchor3dlane_iter.pth --show-dir outputs
```
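For reference, the `data_root` edit might look like the following. This is a sketch only: the surrounding mmsegmentation-style config fields are omitted, and whether it should point at the raw release or the processed `data/Apollosim` folder depends on how you generated the annotations.

```python
# configs/apollosim/anchor3dlane_iter.py (excerpt, sketch only)
# Point the dataset root at the shared robodata copy instead of a local path.
data_root = '/robodata/public_datasets/Datasets/Apollo_Sim_3D_Lane_Release'
```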
This repo is the official PyTorch implementation for the paper:
Anchor3DLane: Learning to Regress 3D Anchors for Monocular 3D Lane Detection. Accepted by CVPR 2023.
In this paper, we define lane anchors in 3D space and propose a BEV-free method named Anchor3DLane that predicts 3D lanes directly from front-view (FV) representations. The 3D lane anchors are projected onto the FV features to extract anchor features, which carry both structural and contextual information for making accurate predictions. We further extend Anchor3DLane to the multi-frame setting to incorporate temporal information for improved performance.
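The projection-and-sampling step is the core of the method. Below is an illustrative sketch of that idea, not the repo's actual implementation: it assumes anchor points already expressed in camera coordinates, a pinhole intrinsics matrix `K`, and bilinear sampling of the FV feature map; the helper `sample_anchor_features` is hypothetical, not part of the codebase.

```python
# Illustrative sketch: project 3D anchor points into the front-view feature
# map and sample per-point features with bilinear interpolation.
import torch
import torch.nn.functional as F

def sample_anchor_features(fv_feat, anchors_3d, K, img_size):
    """fv_feat: (B, C, H, W) front-view features (assumed spatially aligned
    with the image; otherwise scale uv by the feature stride).
    anchors_3d: (B, N, P, 3) anchor points in camera coordinates.
    K: (B, 3, 3) camera intrinsics; img_size: (img_h, img_w)."""
    B, N, P, _ = anchors_3d.shape
    pts = anchors_3d.reshape(B, N * P, 3)               # flatten all anchors
    uvw = torch.bmm(pts, K.transpose(1, 2))             # pinhole projection
    uv = uvw[..., :2] / uvw[..., 2:].clamp(min=1e-6)    # perspective divide
    img_h, img_w = img_size
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([uv[..., 0] / img_w * 2 - 1,
                        uv[..., 1] / img_h * 2 - 1], dim=-1)
    feats = F.grid_sample(fv_feat, grid.unsqueeze(2),   # -> (B, C, N*P, 1)
                          align_corners=False)
    return feats.squeeze(-1).reshape(B, -1, N, P)       # (B, C, N, P)
```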
To do:
- Release the checkpoints on the latest version of the OpenLane dataset.
- Support generating predictions using one's own data.
News:
- [2023/06/02] We have added the code to generate data lists in the data conversion tools.
- [2023/06/15] We have supported testing with multiple GPUs.
```
conda create -n lane3d python=3.7 -y
conda activate lane3d
conda install pytorch==1.9.1 torchvision==0.10.1 cudatoolkit=11.1 -c pytorch -y
pip install -U openmim
mim install mmcv-full
pip install -r requirements.txt
```
Refer to ONCE-3DLane to install jarvis.
```
git clone https://github.com/tusen-ai/Anchor3DLane.git
cd Anchor3DLane
python setup.py develop
```
This repo is implemented based on open-mmlab mmsegmentation-v0.26.0; refer to its documentation for more detailed installation information.
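After installation, a quick sanity check from Python can confirm the pinned versions are in place (a minimal sketch using only standard package attributes; the version numbers refer to the commands above):

```python
# Verify that the core dependencies resolve to the expected versions.
import torch
import torchvision
import mmcv

print(torch.__version__)          # expect 1.9.1 (or 1.12.1 for the robodata setup)
print(torchvision.__version__)    # expect 0.10.1 (or 0.13.1)
print(mmcv.__version__)           # expect <= 1.6.2 if pinned as in the notes above
print(torch.cuda.is_available())  # should be True on a CUDA-enabled machine
```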
The data folders are organized as follows:
```
├── data/
|   ├── Apollosim/
|   |   ├── data_splits/
|   |   |   └── standard/
|   |   |       ├── train.json
|   |   |       ├── test.json
|   |   |       └── ...
|   |   ├── data_lists/...
|   |   ├── images/...
|   |   └── cache_dense/...        # processed lane annotations
|   ├── OpenLane/
|   |   ├── data_splits/...
|   |   ├── data_lists/...         # lists of training/testing data
|   |   ├── images/...
|   |   ├── lane3d_1000/...        # original lane annotations
|   |   ├── cache_dense/...
|   |   └── prev_data_release/...  # temporal poses
|   └── ONCE/
|       ├── raw_data/              # camera images
|       ├── annotations/           # original lane annotations
|       |   ├── train/...
|       |   └── val/...
|       ├── data_splits/...
|       ├── data_lists/...
|       └── cache_dense/...
```
Note: You can generate the `data_lists` files by running our data conversion tools as mentioned below. We also provide the data lists we used in the `data/` folder of this repo.
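As a quick sanity check after generating the data lists, you can count the entries per split. This sketch assumes the `data_lists` files are plain text with one sample per line; the exact format is defined by the conversion tools:

```python
# Count entries in each generated ApolloSim data list (assuming plain-text
# files with one sample per line; the exact format is set by the converters).
from pathlib import Path

for split_file in sorted(Path("data/Apollosim/data_lists").iterdir()):
    if split_file.is_file():
        with split_file.open() as f:
            print(split_file.name, sum(1 for _ in f))
```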
1. Download the dataset from the ApolloSim Dataset and organize the data folder as mentioned above.
2. Change the data path in `apollosim.py` and generate the annotation pickle files by running:
```
cd tools/convert_datasets
python apollosim.py [apollo_root]
```
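Once the conversion finishes, you can check that the generated pickle files load correctly. The snippet below is a generic sketch: the annotation schema is whatever `apollosim.py` writes into `cache_dense/`, so only the top-level structure is inspected:

```python
# Load one processed annotation pickle from cache_dense/ and inspect its
# top-level structure (assumes a .pkl extension; nothing field-specific
# is accessed, since the schema is defined by the converter).
import pickle
from pathlib import Path

pkl_path = next(Path("data/Apollosim/cache_dense").glob("*.pkl"))
with pkl_path.open("rb") as f:
    ann = pickle.load(f)
print(pkl_path.name, type(ann))
if isinstance(ann, dict):
    print(list(ann.keys()))  # which fields did the converter store?
```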
1. Refer to the OpenLane Dataset for data downloading and organize the data folder as mentioned above.
2. Merge annotations and generate pickle files by running:
```
cd tools/convert_datasets
python openlane.py [openlane_root] --merge
python openlane.py [openlane_root] --generate
```
3. (Optional) If you wish to run the multi-frame experiments, download the cross-frame pose data we processed from Baidu Disk. We also provide the cross-frame pose extraction script at `tools/convert_datasets/openlane_temporal.py` for customized use. You can fetch the raw pose data from Baidu Disk or extract it with the tools provided in `save_pose()`.
1. Refer to the ONCE-3DLane Dataset for data downloading and organize the data folder as mentioned above.
2. Merge annotations and generate pickle files by running the following commands:
```
cd tools/convert_datasets
python once.py [once_root] --merge
python once.py [once_root] --generate
```
We provide the pretrained weights of Anchor3DLane and Anchor3DLane+ (with iterative regression) on the ApolloSim-Standard and ONCE-3DLane datasets. For the OpenLane dataset, we additionally provide weights for Anchor3DLane-T+ (with multi-frame interaction).
Results on ApolloSim-Standard:

Model | F1 | AP | x error close/m | x error far/m | z error close/m | z error far/m | Baidu Disk Link |
---|---|---|---|---|---|---|---|
Anchor3DLane | 95.6 | 97.2 | 0.052 | 0.306 | 0.015 | 0.223 | download |
Anchor3DLane+ | 97.1 | 95.4 | 0.045 | 0.300 | 0.016 | 0.223 | download |
Results on OpenLane:

Model | Backbone | F1 | Category Acc | x error close/m | x error far/m | z error close/m | z error far/m | Baidu Disk Link |
---|---|---|---|---|---|---|---|---|
Anchor3DLane | ResNet-18 | 53.1 | 90.0 | 0.300 | 0.311 | 0.103 | 0.139 | download |
Anchor3DLane | EfficientNet-B3 | 56.0 | 89.0 | 0.293 | 0.317 | 0.103 | 0.130 | download |
Anchor3DLane+ | ResNet-18 | 53.7 | 90.9 | 0.276 | 0.311 | 0.107 | 0.138 | download |
Anchor3DLane-T+ | ResNet-18 | 54.3 | 90.7 | 0.275 | 0.310 | 0.105 | 0.135 | download |
Note: We used an earlier version of the OpenLane dataset in our paper, whose annotations differ significantly from the latest version in the lane points' coordinates, as mentioned in this issue. It is therefore expected that you may not reproduce the performance reported in our paper when testing the provided checkpoints on the latest OpenLane validation set. You can still reproduce the reported performance by training on the training set of the latest version. We will also release checkpoints trained on the latest dataset soon.
Results on ONCE-3DLane:

Model | Backbone | F1 | Precision | Recall | CD Error/m | Baidu Disk Link |
---|---|---|---|---|---|---|
Anchor3DLane | ResNet-18 | 74.44 | 80.50 | 69.23 | 0.064 | download |
Anchor3DLane | EfficientNet-B3 | 75.02 | 83.22 | 68.29 | 0.064 | download |
Anchor3DLane+ | ResNet-18 | 74.87 | 80.85 | 69.71 | 0.060 | download |
Run the following commands to evaluate a given checkpoint:
```
export PYTHONPATH=$PYTHONPATH:./gen-efficientnet-pytorch
python tools/test.py [config] [checkpoint] --show-dir [output_dir] [--show]
```
You can append `--show` to generate visualization results in `[output_dir]/vis`.
For multi-GPU testing, run one of the following commands:
```
bash tools/dist_test.sh [config] [checkpoint] [num_gpu] --show-dir [output_dir] [--show]
```
or
```
bash tools/slurm_test.sh [PARTITION] [JOB_NAME] [config] [checkpoint] --show-dir [output_dir] [--show]
```
1. Download the pretrained weights from Baidu Disk and put them in the `./pretrained/` directory.
2. Modify the `work_dir` in the `[config]` file to your desired output directory (see the sketch after the training commands below).
3. For single-GPU training, run the following commands:
```
export PYTHONPATH=$PYTHONPATH:./gen-efficientnet-pytorch
python tools/train.py [config]
```
4. For multi-GPU training, run one of the following commands:
```
bash tools/dist_train.sh [config] [num_gpu]
```
or
```
bash tools/slurm_train.sh [PARTITION] [JOB_NAME] [config]
```
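As noted in step 2 above, the `work_dir` change is a one-line edit in the config file. A sketch (the directory name is just an example; the rest of the config stays unchanged):

```python
# [config] excerpt (sketch only): write training logs and checkpoints
# to a directory of your choice.
work_dir = './work_dirs/anchor3dlane_apollosim'
```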
We present the visualization results of Anchor3DLane on the ApolloSim, OpenLane, and ONCE-3DLane datasets.
If you find this repo useful for your research, please cite
```
@inproceedings{huang2023anchor3dlane,
  title     = {Anchor3DLane: Learning to Regress 3D Anchors for Monocular 3D Lane Detection},
  author    = {Huang, Shaofei and Shen, Zhenwei and Huang, Zehao and Ding, Zi-han and Dai, Jiao and Han, Jizhong and Wang, Naiyan and Liu, Si},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2023}
}
```
For questions about our paper or code, please contact Shaofei Huang ([email protected]).