Pruning Self-attentions into Convolutional Layers in Single Path

This is the official repository for our paper: Pruning Self-attentions into Convolutional Layers in Single Path, by Haoyu He, Jing Liu, Zizheng Pan, Jianfei Cai, Jing Zhang, Dacheng Tao, and Bohan Zhuang.


Introduction:

To reduce the massive computational cost of ViTs and introduce convolutional inductive bias, SPViT prunes pre-trained ViT models into accurate and compact hybrid models by pruning self-attention layers into convolutional layers. Thanks to the proposed weight-sharing scheme between self-attention and convolutional layers, which casts the search problem as finding which subset of parameters to use, SPViT significantly reduces the search cost.
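The weight-sharing idea rests on the observation that a convolution over tokens can be written as multi-head self-attention with fixed, one-hot attention maps, so the convolutional weights are a subset of the attention layer's parameters. The toy NumPy sketch below illustrates this equivalence for a 1-D token sequence; it is an illustration of the principle, not the repository's implementation, and all names in it (offsets, W, etc.) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 4                      # number of tokens, embedding dimension
offsets = (-1, 0, 1)             # a kernel-size-3 "convolution" along the token axis
x = rng.normal(size=(n, d))
W = {off: rng.normal(size=(d, d)) for off in offsets}  # per-offset projection weights

# 1) Convolution view: each output token mixes its neighbours
#    with per-offset weight matrices (zero padding at the borders).
conv = np.zeros((n, d))
for i in range(n):
    for off in offsets:
        j = i + off
        if 0 <= j < n:
            conv[i] += x[j] @ W[off]

# 2) Attention view: one "head" per offset, whose attention map is a
#    fixed one-hot matrix that always attends to the token `off` away,
#    reusing the *same* projection weights W[off].
attn = np.zeros((n, d))
for off in offsets:
    A = np.zeros((n, n))
    for i in range(n):
        if 0 <= i + off < n:
            A[i, i + off] = 1.0
    attn += A @ x @ W[off]

assert np.allclose(conv, attn)   # the two views produce identical outputs
```

Because the convolutional path reuses the attention projections, searching for which layers to prune reduces to choosing which subset of the shared parameters to keep, rather than training separate candidate operations.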

Getting started:

In this repository, we provide code for pruning two representative ViT models.


If you find our paper useful, please consider citing:

@article{he2021Pruning,
  title={Pruning Self-attentions into Convolutional Layers in Single Path},
  author={He, Haoyu and Liu, Jing and Pan, Zizheng and Cai, Jianfei and Zhang, Jing and Tao, Dacheng and Zhuang, Bohan},
  journal={arXiv preprint arXiv:2111.11802},
  year={2021}
}
