VITA-Group / M3ViT
[NeurIPS 2022] “M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”, Hanxue Liang*, Zhiwen Fan*, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang
☆122 · Updated 2 years ago
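The paper pairs a mixture-of-experts (MoE) feed-forward layer inside each ViT block with an accelerator co-design: a router activates only a few experts per token, keeping per-token compute low for multi-task learning. Below is a minimal, illustrative PyTorch sketch of a top-k gated MoE MLP of this kind; the class name `MoEFeedForward`, its arguments, and the dense per-expert loop are assumptions for illustration, not the repository's actual API or its co-designed hardware kernels.

```python
# Illustrative sketch only: a top-k gated mixture-of-experts MLP for a ViT block.
# Names and structure are assumptions, not taken from the M3ViT codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFeedForward(nn.Module):
    """Sparsely gated mixture-of-experts MLP (replaces the dense MLP in a transformer block)."""

    def __init__(self, dim: int, hidden_dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router: scores each token against every expert.
        self.gate = nn.Linear(dim, num_experts)
        # Each expert is an ordinary ViT-style MLP.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        b, n, d = x.shape
        tokens = x.reshape(-1, d)
        scores = F.softmax(self.gate(tokens), dim=-1)          # (b*n, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)     # each token keeps its top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the chosen experts

        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                routed = indices[:, k] == e                    # tokens whose k-th choice is expert e
                if routed.any():
                    out[routed] += weights[routed, k].unsqueeze(-1) * expert(tokens[routed])
        return out.reshape(b, n, d)
```

A production MoE layer would typically batch tokens per expert and add a load-balancing term to the router loss; the per-expert loop above trades speed for readability.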
Alternatives and similar repositories for M3ViT
Users interested in M3ViT are comparing it to the libraries listed below.
- ☆51 · Updated 10 months ago
- ☆46 · Updated last year
- [ICCV 2023] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging techniques ☆99 · Updated 2 years ago
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆72 · Updated 3 years ago
- Recent Advances on Efficient Vision Transformers ☆51 · Updated 2 years ago
- [CVPR 2022] This is the official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition". ☆55 · Updated 2 years ago
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆96 · Updated 2 years ago
- PELA: Learning Parameter-Efficient Models with Low-Rank Approximation [CVPR 2024] ☆18 · Updated last year
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆29 · Updated 2 years ago
- [CVPR 2023] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆72 · Updated last year
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆31 · Updated last year
- [TMLR] Official PyTorch implementation of the paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precision" ☆45 · Updated 9 months ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆53 · Updated last year
- Python code for the ICLR 2022 spotlight paper EViT: Expediting Vision Transformers via Token Reorganizations ☆189 · Updated last year
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) ☆159 · Updated 3 years ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆70 · Updated 11 months ago
- ☆27 · Updated 2 years ago
- ☆12 · Updated last year
- ☆11 · Updated last year
- [NeurIPS 2022] “Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation”, Ziyu Jiang*, Xuxi Chen*, Xueqin Huan… ☆19 · Updated 2 years ago
- The official implementation of the ICML 2023 paper OFQ-ViT ☆32 · Updated last year
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks". ☆247 · Updated 2 years ago
- Adaptive Token Sampling for Efficient Vision Transformers (ECCV 2022 Oral Presentation) ☆101 · Updated last year
- Super-resolution; post-training quantization; model compression ☆12 · Updated last year
- ☆21 · Updated last year
- (AAAI 2023 Oral) PyTorch implementation of "CF-ViT: A General Coarse-to-Fine Method for Vision Transformer" ☆105 · Updated 2 years ago
- ☆90 · Updated 2 years ago
- [ICLR 2023] Trainability Preserving Neural Pruning (PyTorch) ☆33 · Updated 2 years ago
- The codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ☆24 · Updated 8 months ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆22 · Updated last year