Stars
[ICCV 2023] I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference
An object detector designed for small pet targets.
An introductory tutorial on recommender systems; read online at: https://datawhalechina.github.io/fun-rec/
Quantization library for PyTorch. Support low-precision and mixed-precision quantization, with hardware implementation through TVM.
The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models
FLOPs counter for convolutional networks in the PyTorch framework
Federated gradient boosted decision tree learning
[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Transformer-related optimization, including BERT and GPT
[ECCV 2022] Official Implementation of paper "V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer"
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
🔥 🔥 [WACV 2024] Mini but Mighty: Finetuning ViTs with Mini Adapters
An official code release of the paper RGB no more: Minimally Decoded JPEG Vision Transformers
[CVPR 2020] "Learning to Structure an Image with Few Colors". Critical structure for network recognition. #explainable-ai
(CVPR 2022) A minimalist, mapless, end-to-end self-driving stack for joint perception, prediction, planning and control.
Lanelet maps were introduced in the context of the autonomous completion of the Bertha-Benz-Memorial-Route in 2013.