
# CO-DETR

> [DETRs with Collaborative Hybrid Assignments Training](https://arxiv.org/abs/2211.12860)

## Abstract

In this paper, we provide the observation that too few queries assigned as positive samples in DETR with one-to-one set matching leads to sparse supervision on the encoder's output, which considerably hurts the discriminative feature learning of the encoder, and vice versa for attention learning in the decoder. To alleviate this, we present a novel collaborative hybrid assignments training scheme, namely Co-DETR, to learn more efficient and effective DETR-based detectors from versatile label assignment manners. This new training scheme can easily enhance the encoder's learning ability in end-to-end detectors by training multiple parallel auxiliary heads supervised by one-to-many label assignments such as ATSS and Faster R-CNN. In addition, we conduct extra customized positive queries by extracting the positive coordinates from these auxiliary heads to improve the training efficiency of positive samples in the decoder. At inference, these auxiliary heads are discarded, and thus our method introduces no additional parameters or computational cost to the original detector while requiring no hand-crafted non-maximum suppression (NMS). We conduct extensive experiments to evaluate the effectiveness of the proposed approach on DETR variants, including DAB-DETR, Deformable-DETR, and DINO-Deformable-DETR. The state-of-the-art DINO-Deformable-DETR with Swin-L can be improved from 58.5% to 59.5% AP on COCO val. Surprisingly, incorporated with the ViT-L backbone, we achieve 66.0% AP on COCO test-dev and 67.9% AP on LVIS val, outperforming previous methods by clear margins with much smaller model sizes.
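The following PyTorch-style sketch illustrates the collaborative hybrid assignment idea described above. It is a minimal illustration, not the official implementation: `encoder`, `decoder`, `detr_head`, the auxiliary heads, and `boxes_to_queries` are hypothetical stand-ins for the corresponding modules, and the loss interfaces are assumptions.

```python
import torch

def co_detr_training_step(images, targets, encoder, decoder,
                          detr_head, aux_heads, boxes_to_queries):
    """One training step of the collaborative hybrid assignment scheme (sketch)."""
    feats = encoder(images)  # shared encoder features

    # One-to-one branch: the standard DETR decoder trained with
    # Hungarian (one-to-one) set matching.
    queries = decoder.init_queries(feats)
    dec_out = decoder(feats, queries)
    loss = detr_head.loss_one_to_one(dec_out, targets)

    # One-to-many branches: parallel auxiliary heads (e.g. ATSS- or
    # Faster R-CNN-style assignment) densify supervision on the
    # encoder's output. They are discarded at inference.
    pos_boxes = []
    for head in aux_heads:
        aux_loss, positives = head.loss_one_to_many(feats, targets)
        loss = loss + aux_loss
        pos_boxes.append(positives)

    # Customized positive queries: coordinates of the auxiliary heads'
    # positive samples become extra decoder queries, each supervised by
    # its originating ground-truth box.
    extra_queries = boxes_to_queries(torch.cat(pos_boxes))
    extra_out = decoder(feats, extra_queries)
    loss = loss + detr_head.loss_positive_queries(extra_out, targets)
    return loss
```

Because the auxiliary heads and extra queries exist only during training, the deployed detector is architecturally identical to the baseline DETR variant.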

## Results and Models

|  Model   | Backbone | Epochs | Aug  |            Dataset            | box AP | Config | Download     |
| :------: | :------: | :----: | :--: | :---------------------------: | :----: | :----: | :----------: |
| Co-DINO  |   R50    |   12   | LSJ  |             COCO              |  52.0  | config | model \| log |
| Co-DINO* |   R50    |   12   | DETR |             COCO              |  52.1  | config | model        |
| Co-DINO* |   R50    |   36   | LSJ  |             COCO              |  54.8  | config | model        |
| Co-DINO* |  Swin-L  |   12   | DETR |             COCO              |  58.9  | config | model        |
| Co-DINO* |  Swin-L  |   12   | LSJ  |             COCO              |  59.3  | config | model        |
| Co-DINO* |  Swin-L  |   36   | DETR |             COCO              |  60.0  | config | model        |
| Co-DINO* |  Swin-L  |   36   | LSJ  |             COCO              |  60.7  | config | model        |
| Co-DINO* |  Swin-L  |   16   | DETR | Objects365 pre-trained + COCO |  64.1  | config | model        |

## Note

- Models marked with * were not trained by us; they are taken from the official CO-DETR repository.
- We find that performance is unstable and may fluctuate by about 0.3 mAP between runs.
- If you want to save GPU memory by enabling activation checkpointing, install fairscale with `pip install fairscale`; a minimal sketch of the wrapper is shown below.
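As a rough illustration of the checkpointing mentioned above, fairscale's `checkpoint_wrapper` can wrap any `nn.Module` so that its activations are recomputed during the backward pass instead of being stored. The toy block below is a placeholder, not a Co-DETR component.

```python
import torch
import torch.nn as nn
from fairscale.nn import checkpoint_wrapper

# A toy module standing in for any memory-hungry block (e.g. a backbone stage).
block = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))

# Wrapping trades compute for memory: activations inside the block are
# dropped after the forward pass and recomputed during backward.
block = checkpoint_wrapper(block)

x = torch.randn(8, 256, requires_grad=True)
block(x).sum().backward()  # behaves like the unwrapped module, with lower peak memory
```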