MingSun-Tse / Awesome-Efficient-ViT
Recent Advances on Efficient Vision Transformers
☆55 · Updated 3 years ago
Alternatives and similar repositories for Awesome-Efficient-ViT
Users interested in Awesome-Efficient-ViT are comparing it to the repositories listed below.
- In progress. ☆67 · Updated last year
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆34 · Updated 2 years ago
- ☆48 · Updated 2 years ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆41 · Updated 4 months ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆55 · Updated 2 years ago
- [NeurIPS 2022 Spotlight] Official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆73 · Updated 3 years ago
- [ICCV 2023] Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks ☆25 · Updated 2 years ago
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆36 · Updated last year
- [TMLR] Official PyTorch implementation of the paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precisio…" ☆47 · Updated last year
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Updated 2 years ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆80 · Updated last year
- [ECCV 2022] Official implementation of LIMPQ: "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆61 · Updated 2 years ago
- [NeurIPS 2022] “M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”, Hanxue … ☆134 · Updated 3 years ago
- [ICML 2023] Official implementation of the paper BiBench: Benchmarking and Analyzing Network Binar… ☆56 · Updated last year
- ☆13 · Updated 2 years ago
- Code repo for the paper "BiT: Robustly Binarized Multi-distilled Transformer" ☆114 · Updated 2 years ago
- ☆53 · Updated last year
- [CVPR 2022] PyTorch implementation of "IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Sh…" ☆35 · Updated 3 years ago
- Collections of model quantization algorithms. For any issues, contact Peng Chen (blueardour@gmail.com) ☆73 · Updated 4 years ago
- [CVPR 2022] Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation ☆138 · Updated 3 years ago
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆73 · Updated 3 years ago
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆80 · Updated last year
- ☆25 · Updated 4 years ago
- [ICCV 2023] An approach to enhance the efficiency of Vision Transformers (ViT) by concurrently employing token pruning and token merging tech… ☆104 · Updated 2 years ago
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… ☆33 · Updated 4 years ago
- ☆78 · Updated 3 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training, by Shiwei Liu, Tianlo… ☆77 · Updated 3 years ago
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆105 · Updated 2 years ago
- A generic code base for neural network pruning, especially for pruning at initialization. ☆31 · Updated 3 years ago
- [ICLR'21] Neural Pruning via Growing Regularization (PyTorch) ☆82 · Updated 4 years ago