ziplab / SAQ
This is the official PyTorch implementation for "Sharpness-aware Quantization for Deep Neural Networks".
☆44 · Updated 4 years ago
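For orientation, below is a minimal sketch of what sharpness-aware quantization-aware training could look like in PyTorch, assuming SAQ combines a SAM-style two-step weight perturbation with uniform fake quantization and a straight-through estimator. The `fake_quantize` helper, `QuantLinear` module, `saq_step` loop, bit-width, and `rho` value are illustrative assumptions, not this repository's actual API; consult the repo's code for the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def fake_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniform symmetric fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    # Forward uses the quantized weight; backward passes gradients straight through.
    return w + (w_q - w).detach()


class QuantLinear(nn.Module):
    """Linear layer whose weight is fake-quantized on every forward pass (assumed setup)."""

    def __init__(self, in_features: int, out_features: int, bits: int = 4):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.bits = bits

    def forward(self, x):
        w_q = fake_quantize(self.linear.weight, self.bits)
        return F.linear(x, w_q, self.linear.bias)


def saq_step(model, x, y, opt, rho: float = 0.05):
    """One SAM-style sharpness-aware update on the quantized training loss (sketch)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # 1) Gradient of the task loss at the current (fake-quantized) weights.
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    # 2) Ascent step: perturb each weight towards the sharpest nearby point w + eps.
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm() for p in params if p.grad is not None]))
        eps = [rho * p.grad / (grad_norm + 1e-12) if p.grad is not None
               else torch.zeros_like(p) for p in params]
        for p, e in zip(params, eps):
            p.add_(e)
    opt.zero_grad()

    # 3) Gradient at the perturbed weights, then undo the perturbation and update.
    F.cross_entropy(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    opt.step()
    opt.zero_grad()
    return loss.item()


if __name__ == "__main__":
    # Toy usage on random MNIST-shaped data, purely to show the step runs end to end.
    model = nn.Sequential(nn.Flatten(), QuantLinear(784, 10, bits=4))
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
    print(saq_step(model, x, y, opt))
```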
Alternatives and similar repositories for SAQ
Users interested in SAQ are comparing it to the repositories listed below.
- An official implementation of "Network Quantization with Element-wise Gradient Scaling" (CVPR 2021) in PyTorch. ☆94 · Updated 2 years ago
- Official implementation of Generative Low-bitwidth Data Free Quantization (GDFQ). ☆55 · Updated 2 years ago
- PyTorch implementation of the TPAMI 2022 paper "1xN Pattern for Pruning Convolutional Neural Networks". ☆42 · Updated 3 years ago
- [ICLR'21] Neural Pruning via Growing Regularization (PyTorch). ☆82 · Updated 4 years ago
- ☆17 · Updated 3 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch). ☆34 · Updated 2 years ago
- ☆43 · Updated last year
- In progress. ☆67 · Updated last year
- PyTorch implementation of the NeurIPS 2020 paper "Rotated Binary Neural Network". ☆83 · Updated 3 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated 2 years ago
- ☆78 · Updated 3 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization. ☆93 · Updated 3 years ago
- PyTorch implementation of SSQL (accepted to ECCV 2022 as an oral presentation). ☆73 · Updated 2 years ago
- Post-training sparsity-aware quantization. ☆34 · Updated 2 years ago
- [NeurIPS 2020] ShiftAddNet: A Hardware-Inspired Deep Network. ☆74 · Updated 5 years ago
- ☆17 · Updated 3 years ago
- Code for an ICML 2021 submission.