MingSun-Tse / Awesome-Efficient-ViT
Recent Advances on Efficient Vision Transformers
☆47 · Updated last year

Related projects
Alternatives and complementary repositories for Awesome-Efficient-ViT
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆31 · Updated last year
- In progress. ☆65 · Updated 7 months ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆40 · Updated last year
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆48 · Updated 11 months ago
- ☆42 · Updated last year
- PyTorch implementation of our paper accepted by CVPR 2022: IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Sh… ☆31 · Updated 2 years ago
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆30 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- [ICCV 2023] An approach to enhance the efficiency of Vision Transformers (ViT) by concurrently employing token pruning and token merging tech… ☆89 · Updated last year
- ☆41 · Updated 2 months ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆31 · Updated 11 months ago
- ☆10 · Updated last year
- Awesome Papers and Resources in Deep Neural Network Pruning with Source Code. ☆134 · Updated 2 months ago
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆83 · Updated last year
- [ICLR'21] Neural Pruning via Growing Regularization (PyTorch) ☆83 · Updated 3 years ago
- ☆68 · Updated 2 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆73 · Updated last year
- [NeurIPS 2022] "M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design", Hanxue … ☆97 · Updated last year
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… ☆29 · Updated 2 years ago
- [NeurIPS 2022] "Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation", Ziyu Jiang*, Xuxi Chen*, Xueqin Huan… ☆19 · Updated last year
- Code for NASViT ☆66 · Updated 2 years ago
- The official implementation of the ICML 2023 paper OFQ-ViT ☆27 · Updated last year
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆29 · Updated 3 months ago
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆69 · Updated 2 years ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆53 · Updated 3 months ago
- Transformers trained on Tiny ImageNet ☆47 · Updated 2 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆90 · Updated 11 months ago
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric ☆51 · Updated last year
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation). ☆72 · Updated 2 months ago