ziplab / SAQ
This is the official PyTorch implementation for "Sharpness-aware Quantization for Deep Neural Networks".
☆41 · Updated 3 years ago
Alternatives and similar repositories for SAQ:
Users interested in SAQ are comparing it to the repositories listed below.
- Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples [NeurIPS 2021] ☆31 · Updated 3 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated last year
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆32 · Updated last year
- In progress. ☆63 · Updated 11 months ago
- This is the PyTorch implementation for the paper: Generalizable Mixed-Precision Quantization via Attribution Rank Preservation, which is… ☆25 · Updated 3 years ago
- Collections of model quantization algorithms. Any issues, please contact Peng Chen (blueardour@gmail.com) ☆69 · Updated 3 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆52 · Updated last year
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- Official implementation of Generative Low-bitwidth Data Free Quantization (GDFQ) ☆53 · Updated last year
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆29 · Updated 2 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- Code for ICML 2021 submission ☆35 · Updated 4 years ago
- PyTorch implementation of Towards Efficient Training for Neural Network Quantization