This repository hosts the source code of our paper: ASTD.
The major challenges and the network structure are illustrated in the figures of our paper.
- [07/2024] 📣 We released the code.
Run `pip install -r requirements.txt` in the root directory of the project. Let `$ROOT` denote this root directory.
```
$ROOT
├── data
│   ├── CUHK-SYSU
│   └── PRW
├── exp_cuhk
│   ├── config.yaml
│   ├── epoch_xx.pth
│   └── epoch_xx.pth
└── exp_prw
    ├── config.yaml
    ├── epoch_xx.pth
    └── epoch_xx.pth
```
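Before training or testing, it can save time to verify that the datasets sit where the configs expect them. The following helper is a hypothetical convenience (not part of this repository) that checks for the layout above under `$ROOT/data`:

```python
from pathlib import Path

def check_data_layout(root: str) -> list:
    """Return the expected dataset directories under <root>/data that are
    missing (an empty list means the layout matches the README)."""
    expected = ["CUHK-SYSU", "PRW"]
    data_dir = Path(root) / "data"
    return [name for name in expected if not (data_dir / name).is_dir()]

missing = check_data_layout(".")
if missing:
    print("Missing dataset directories:", ", ".join(missing))
```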
- Following the link in the above table, download our pretrained model to anywhere you like, e.g., `$ROOT/exp_cuhk`.
- Run an inference demo by specifying the paths of the checkpoint and its corresponding configuration file. You can check out the results in the `demo_imgs` directory.
CUHK-SYSU:

```
CUDA_VISIBLE_DEVICES=0 python demo.py --cfg ./configs/cuhk_sysu.yaml --ckpt ./logs/cuhk-sysu/xxx.pth
```

PRW:

```
CUDA_VISIBLE_DEVICES=0 python demo.py --cfg ./configs/prw.yaml --ckpt ./logs/prw/xxx.pth
```
Pick one configuration file you like in `$ROOT/configs` and run with it:

```
python train.py --cfg configs/cuhk_sysu.yaml
```
Note: At present, our script only supports single-GPU training; distributed training will also be supported in the future. By default, the batch size and the learning rate during training are set to 3 and 0.003 respectively, which requires about 28 GB of GPU memory. If your GPU cannot provide the required memory, try a smaller batch size and learning rate (performance may degrade). Specifically, your setting should follow the Linear Scaling Rule: when the minibatch size is multiplied by k, multiply the learning rate by k. For example:
CUHK:

```
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/cuhk_sysu.yaml INPUT.BATCH_SIZE_TRAIN 3 SOLVER.BASE_LR 0.003 SOLVER.MAX_EPOCHS 20 SOLVER.LR_DECAY_MILESTONES [11] MODEL.LOSS.USE_SOFTMAX True SOLVER.LW_RCNN_SOFTMAX_2ND 0.1 SOLVER.LW_RCNN_SOFTMAX_3RD 0.1 OUTPUT_DIR ./logs/cuhk-sysu
```

If you run out of memory, run this instead:

```
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/cuhk_sysu.yaml INPUT.BATCH_SIZE_TRAIN 2 SOLVER.BASE_LR 0.0012 SOLVER.MAX_EPOCHS 20 SOLVER.LR_DECAY_MILESTONES [11] MODEL.LOSS.USE_SOFTMAX True SOLVER.LW_RCNN_SOFTMAX_2ND 0.1 SOLVER.LW_RCNN_SOFTMAX_3RD 0.1 OUTPUT_DIR ./logs/cuhk-sysu
```

PRW:

```
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/prw.yaml INPUT.BATCH_SIZE_TRAIN 3 SOLVER.BASE_LR 0.003 SOLVER.MAX_EPOCHS 14 SOLVER.LR_DECAY_MILESTONES [11] MODEL.LOSS.USE_SOFTMAX True SOLVER.LW_RCNN_SOFTMAX_2ND 0.1 SOLVER.LW_RCNN_SOFTMAX_3RD 0.1 OUTPUT_DIR ./logs/prw
```
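The Linear Scaling Rule above can be sketched as a one-line helper (this function is an illustration, not part of the repository). Starting from the defaults of batch size 3 and learning rate 0.003, the rule gives 0.002 for batch size 2; note that the out-of-memory example above chooses an even more conservative 0.0012.

```python
def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear Scaling Rule: scale the learning rate by the same
    factor k that scales the minibatch size."""
    return base_lr * new_batch / base_batch

# Defaults from this README: batch size 3, learning rate 0.003.
print(scaled_lr(0.003, 3, 2))  # batch size 2 -> ~0.002
print(scaled_lr(0.003, 3, 6))  # batch size 6 -> ~0.006
```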
Tip: If the training process stops unexpectedly, you can resume from the specified checkpoint:

```
python train.py --cfg configs/cuhk_sysu.yaml --resume --ckpt /path/to/your/checkpoint
```
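For intuition, resuming works because the checkpoint stores everything training needs to restart where it stopped. The actual script saves PyTorch state_dicts via `torch.save`; the framework-agnostic sketch below (hypothetical function names, plain `pickle` instead of `torch.save`) shows the idea:

```python
import pickle

def save_checkpoint(path, model_state, optimizer_state, epoch):
    """Persist model weights, optimizer state, and the finished epoch."""
    with open(path, "wb") as f:
        pickle.dump({"model": model_state,
                     "optimizer": optimizer_state,
                     "epoch": epoch}, f)

def resume_from(path):
    """Return (model_state, optimizer_state, first_epoch_to_run)."""
    with open(path, "rb") as f:
        ckpt = pickle.load(f)
    # Training continues from the epoch after the one that was saved.
    return ckpt["model"], ckpt["optimizer"], ckpt["epoch"] + 1
```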
Suppose the output directory is `$ROOT/exp_cuhk`. Test the trained model:
For CUHK-SYSU:

```
CUDA_VISIBLE_DEVICES=0 python train.py --cfg ./configs/cuhk_sysu.yaml --eval --ckpt ./logs/cuhk-sysu/xxx.pth
```

Test with the Context Bipartite Graph Matching (CBGM) algorithm:

```
CUDA_VISIBLE_DEVICES=0 python train.py --cfg ./configs/cuhk_sysu.yaml --eval --ckpt ./logs/cuhk-sysu/xxx.pth EVAL_USE_CBGM True
```

Test the upper bound of person search performance by using ground-truth (GT) boxes:

```
CUDA_VISIBLE_DEVICES=0 python train.py --cfg ./configs/cuhk_sysu.yaml --eval --ckpt ./logs/cuhk-sysu/xxx.pth EVAL_USE_GT True
```
For PRW:

```
CUDA_VISIBLE_DEVICES=0 python train.py --cfg ./configs/prw.yaml --eval --ckpt ./logs/prw/xxx.pth EVAL_USE_CBGM True
```
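The core step of Context Bipartite Graph Matching is a one-to-one assignment between persons on the query side (the query plus its context) and detected boxes in a gallery image, chosen to maximize total similarity. A minimal sketch of that matching step, using SciPy's Hungarian solver on a made-up similarity matrix (the real algorithm builds this matrix from re-id features and context):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy similarity matrix: rows = query-side persons, columns = gallery boxes.
sim = np.array([
    [0.9, 0.2, 0.1],
    [0.3, 0.8, 0.2],
])

# linear_sum_assignment minimizes cost, so negate to maximize similarity.
rows, cols = linear_sum_assignment(-sim)
for r, c in zip(rows, cols):
    print(f"query person {r} -> gallery box {c} (sim {sim[r, c]:.2f})")
```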
Thanks to the authors of the following repositories for their code, which was integral to this project:
Pull requests are welcome! Before submitting a PR, do not forget to run `./dev/linter.sh`, which provides syntax checking and code style optimization.
If you find this code useful for your research, please cite our paper:
```
@inproceedings{zqx2024,
  title={xxxx},
  author={Zhang, Qixian and Miao, Duoqian},
  booktitle={xxxxxx},
  volume={xx},
  number={x},
  pages={xxx--xxx},
  year={2024}
}

@inproceedings{li2021sequential,
  title={Sequential End-to-end Network for Efficient Person Search},
  author={Li, Zhengjia and Miao, Duoqian},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={3},
  pages={2011--2019},
  year={2021}
}
```
If you have any questions, please feel free to contact us. E-mail: [email protected]