ChaofanTao / FAT_Quantization
PyTorch implementation of FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation
☆27 · Updated 4 years ago
Alternatives and similar repositories for FAT_Quantization
Users interested in FAT_Quantization are comparing it to the libraries listed below.
- Code for the NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" ☆55 · Updated 5 years ago
- Source code for the paper "Robust Quantization: One Model to Rule Them All" ☆40 · Updated 2 years ago
- ☆47 · Updated 5 years ago
- BitSplit post-training quantization ☆50 · Updated 3 years ago
- ☆57 · Updated 4 years ago
- Example of applying Gaussian and Laplace clipping to CNN activations ☆34 · Updated 6 years ago
- Official implementation of Generative Low-bitwidth Data Free Quantization (GDFQ) ☆54 · Updated 2 years ago
- Code for the ECCV 2020 paper "Post-Training Piecewise Linear Quantization for Deep Neural Networks" ☆69 · Updated 3 years ago
- Implementation of "Towards Effective Low-bitwidth Convolutional Neural Networks" ☆41 · Updated 7 years ago
- Global Sparse Momentum SGD for pruning very deep neural networks ☆44 · Updated 3 years ago
- PyTorch implementation of EdMIPS: https://arxiv.org/pdf/2004.05795.pdf ☆60 · Updated 5 years ago
- ☆48 · Updated 5 years ago
- Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression (CVPR 2020)