VITA-Group / SViTE
[NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
☆89 · Updated last year
Alternatives and similar repositories for SViTE:
Users interested in SViTE are also comparing it to the repositories listed below.
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆52 · Updated last year
- This is the official PyTorch implementation for "Sharpness-aware Quantization for Deep Neural Networks". ☆40 · Updated 3 years ago
- Code for ViTAS: Vision Transformer Architecture Search ☆52 · Updated 3 years ago
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆71 · Updated 2 years ago
- NAS benchmark from "Prioritized Architecture Sampling with Monte-Carlo Tree Search", CVPR 2021 ☆38 · Updated 3 years ago
- ☆42 · Updated last year
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021) ☆64 · Updated 3 years ago
- [CVPRW 21] "BNN - BN = ? Training Binary Neural Networks without Batch Normalization", Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu… ☆57 · Updated 3 years ago
- ☆44 · Updated 6 months ago
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆70 · Updated 2 years ago
- (ICCV 2021) BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search ☆139 · Updated 3 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆32 · Updated last year
- Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling" ☆104 · Updated 3 years ago
- This repository implements the paper "Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations" ☆20 · Updated 3 years ago
- "How Do Adam and Training Strategies Help BNNs Optimization?", ICML 2021 ☆59 · Updated 3 years ago
- Learning recognition/segmentation models without end-to-end training. 40%-60% less GPU memory footprint. Same training time. Better perfo… ☆90 · Updated 2 years ago
- Code for NASViT ☆68 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- PyTorch implementation of BigNAS: Scaling Up Neural Architecture Search with Big Single-Stage Models ☆25 · Updated 2 years ago
- [ICLR 2021 Spotlight] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yinin… ☆30 · Updated last year
- [ICLR 2022] "As-ViT: Auto-scaling Vision Transformers without Training" by Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wa… ☆76 · Updated 3 years ago
- ☆17 · Updated 2 years ago
- [NeurIPS 2022] "Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation", Ziyu Jiang*, Xuxi Chen*, Xueqin Huan… ☆19 · Updated last year
- [NeurIPS 2020] ShiftAddNet: A Hardware-Inspired Deep Network ☆71 · Updated 4 years ago
- Collections of model quantization algorithms. For any issues, please contact Peng Chen (blueardour@gmail.com) ☆69 · Updated 3 years ago
- Code for the accepted paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization", NeurIPS 2019 ☆54 · Updated 4 years ago
- In progress. ☆63 · Updated 11 months ago
- [CVPR 2021] Contrastive Neural Architecture Search with Neural Architecture Comparators ☆40 · Updated 2 years ago
- [CVPR 2022] This is the official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition" ☆52 · Updated 2 years ago