ziplab / EcoFormer
[NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity"
☆72 · Updated 2 years ago
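For context, EcoFormer sits in the family of linear-complexity attention methods. The snippet below is a minimal, generic kernel-based linear attention sketch (using the common elu(x)+1 feature map), not the paper's energy-saving binarized formulation; the function name and shapes are illustrative assumptions, not the repository's API.

```python
import torch

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized attention with O(N) time and memory in sequence length.

    q, k, v: (batch, heads, seq_len, dim). The elu(x)+1 feature map keeps
    scores non-negative so the normalizer stays positive.
    """
    q = torch.nn.functional.elu(q) + 1.0  # phi(Q)
    k = torch.nn.functional.elu(k) + 1.0  # phi(K)
    # Aggregate keys and values first: a (dim x dim) summary per head,
    # instead of an (N x N) attention matrix.
    kv = torch.einsum("bhnd,bhne->bhde", k, v)
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)

# Example: 4096 tokens without materializing a 4096 x 4096 attention matrix.
q = torch.randn(2, 8, 4096, 64)
k = torch.randn(2, 8, 4096, 64)
v = torch.randn(2, 8, 4096, 64)
out = linear_attention(q, k, v)  # (2, 8, 4096, 64)
```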
Alternatives and similar repositories for EcoFormer
Users interested in EcoFormer are comparing it to the repositories listed below.
- [ICLR 2022] "As-ViT: Auto-scaling Vision Transformers without Training" by Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wa…☆76Updated 3 years ago
- This is the official PyTorch implementation for "Mesa: A Memory-saving Training Framework for Transformers".☆120Updated 3 years ago
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023)☆31Updated 2 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li…☆53Updated last year
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin…☆40Updated 2 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch)☆33Updated 2 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang…☆90Updated last year
- Official PyTorch implementation of our ECCV 2022 paper "Sliced Recursive Transformer"☆65Updated 2 years ago
- ☆57Updated 4 years ago
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan …☆71Updated 3 years ago
- code for NASViT☆66Updated 3 years ago
- [ICCV 2021] Official implementation of "Scalable Vision Transformers with Hierarchical Pooling"☆33Updated 3 years ago
- Benchmarking Attention Mechanism in Vision Transformers.☆18Updated 2 years ago
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers☆25Updated 4 months ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference☆30Updated last year
- ☆46Updated last year
- ☆24Updated 3 years ago
- Code for ViTAS: Vision Transformer Architecture Search ☆50 · Updated 3 years ago
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023 notable top 25%)☆24Updated last year
- This is an official PyTorch/GPU implementation of SupMAE. ☆78 · Updated 2 years ago
- Recent Advances on Efficient Vision Transformers ☆51 · Updated 2 years ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆39 · Updated last year
- Collections of model quantization algorithms. Any issues, please contact Peng Chen (blueardour@gmail.com) ☆71 · Updated 3 years ago
- Code for our ICLR 2022 paper "Generalizing Few-Shot NAS with Gradient Matching" ☆22 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- Official PyTorch implementation of Super Vision Transformer (IJCV) ☆43 · Updated last year
- Open-source release of the research work published on arXiv: https://arxiv.org/abs/2106.02689 ☆52 · Updated 3 years ago
- Code repo for the paper "BiT: Robustly Binarized Multi-distilled Transformer" ☆109 · Updated 2 years ago
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation"☆36Updated last year
- A simple minimal implementation of Reversible Vision Transformers☆125Updated last year