VITA-Group / M3ViT
[NeurIPS 2022] “M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”, Hanxue Liang*, Zhiwen Fan*, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang
☆104 · Updated 2 years ago
Alternatives and similar repositories for M3ViT:
Users interested in M3ViT are comparing it to the repositories listed below.
- ☆42 · Updated last year
- ☆43 · Updated 5 months ago
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆86 · Updated last year
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆93 · Updated last year
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆71 · Updated 2 years ago
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆62 · Updated 9 months ago
- [CVPR-22] This is the official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition". ☆51 · Updated 2 years ago
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆30 · Updated 2 years ago
- [TMLR] Official PyTorch implementation of the paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precisio… ☆41 · Updated 4 months ago
- ☆12 · Updated last year
- Official PyTorch implementation of "Which Tokens to Use? Investigating Token Reduction in Vision Transformers", presented at ICCV 2023 NIVT … ☆33 · Updated last year
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆52 · Updated last year
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆32 · Updated last year
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) ☆152 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- Recent Advances on Efficient Vision Transformers ☆49 · Updated 2 years ago
- The codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ☆21 · Updated 3 months ago
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers. ☆101 · Updated last month
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation; 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆126 · Updated 3 months ago
- ☆26 · Updated 2 years ago
- Adaptive Token Sampling for Efficient Vision Transformers (ECCV 2022 Oral Presentation) ☆98 · Updated 9 months ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆65 · Updated 6 months ago
- [ICCV 2023 oral] This is the official repository for our paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning". ☆65 · Updated last year
- The official implementation of the ICML 2023 paper OFQ-ViT ☆30 · Updated last year
- ☆18 · Updated last year
- [NeurIPS'22] This is the official implementation of "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning". ☆175 · Updated last year
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric ☆52 · Updated last year
- PyTorch code for Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆37 · Updated 5 months ago
- DeiT implementation for Q-ViT ☆24 · Updated 2 years ago
- Official PyTorch implementation of Dynamic-Token-Pruning (ICCV 2023) ☆19 · Updated last year