yanghr / BSQ
BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021)
☆42 · Updated 4 years ago

Alternatives and similar repositories for BSQ
Users who are interested in BSQ are comparing it to the repositories listed below.
- ☆40 · Updated 2 years ago
- Neural Network Quantization with Fractional Bit-widths ☆11 · Updated 4 years ago
- BitSplit Post-training Quantization ☆50 · Updated 3 years ago
- The PyTorch implementation of Learned Step Size Quantization (LSQ) from ICLR 2020 (unofficial) ☆139 · Updated 5 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆98 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- Code for the ECCV 2020 paper: Post-Training Piecewise Linear Quantization for Deep Neural Networks ☆68 · Updated 4 years ago
- PyTorch implementation of EdMIPS: https://arxiv.org/pdf/2004.05795.pdf ☆60 · Updated 5 years ago
- AFP, a hardware-friendly quantization framework for DNNs contributed by Fangxin Liu and Wenbo Zhao ☆13 · Updated 4 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- Conditional channel- and precision-pruning on neural networks ☆72 · Updated 5 years ago
- Any-Precision Deep Neural Networks (AAAI 2021) ☆61 · Updated 5 years ago
- [CVPR'20] ZeroQ mixed-precision implementation (unofficial): A Novel Zero Shot Quantization Framework ☆14 · Updated 4 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆35 · Updated 2 years ago
- ☆78 · Updated 3 years ago
- Simulator for BitFusion ☆102 · Updated 5 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization