houlu369 / Loss-aware-weight-quantization
Implementation of ICLR 2018 paper "Loss-aware Weight Quantization of Deep Networks"
☆26 · Updated 5 years ago
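As a quick orientation, here is a minimal sketch of the loss-aware quantization idea the repository implements: the full-precision weights are binarized with a scaling factor fitted under a diagonal curvature estimate (e.g. an Adam-style second-moment term) rather than a plain magnitude fit. The names below (`loss_aware_binarize`, `second_moment`) are illustrative placeholders, not this repository's API.

```python
# Minimal, illustrative sketch of loss-aware binarization (not the repo's code).
import torch

def loss_aware_binarize(weight: torch.Tensor, second_moment: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Binarize `weight` as alpha * sign(weight), with the per-layer scale alpha
    chosen under a diagonal curvature proxy d, i.e. alpha = sum(d * |w|) / sum(d),
    as in the loss-aware binarization/quantization papers."""
    d = second_moment.sqrt() + eps              # diagonal Hessian proxy (Adam-style)
    alpha = (d * weight.abs()).sum() / d.sum()  # curvature-weighted scaling factor
    b = torch.sign(weight)
    b[b == 0] = 1.0                             # resolve sign(0) to +1
    return alpha * b

# Usage sketch: quantize before the forward pass, keep the full-precision weights
# for the optimizer update (the papers maintain the full-precision copy).
w = torch.randn(256, 128)                       # full-precision layer weights
v = torch.rand_like(w) * 1e-3                   # stand-in for an Adam second-moment estimate
w_q = loss_aware_binarize(w, v)
```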
Alternatives and similar repositories for Loss-aware-weight-quantization
Users interested in Loss-aware-weight-quantization are comparing it to the libraries listed below.
- Implementation of ICLR 2017 paper "Loss-aware Binarization of Deep Networks" ☆18 · Updated 6 years ago
- Training examples for the CVPR 2018 paper "SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks" ☆31 · Updated 5 years ago
- Code for the AAAI 2019 paper "Deep Neural Network Quantization via Layer-Wise Optimization Using Limited Training Data" ☆41 · Updated 6 years ago
- Code for the NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" ☆54 · Updated 5 years ago
- Source code for the paper "Robust Quantization: One Model to Rule Them All" ☆40 · Updated 2 years ago
- ProxQuant: Quantized Neural Networks via Proximal Operators ☆29 · Updated 6 years ago
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search (NeurIPS 2020) ☆21 · Updated 4 years ago
- ☆52 · Updated 6 years ago
- Proximal Mean-field for Neural Network Quantization ☆22 · Updated 5 years ago
- Binary Convolution Network for faster real-time processing in ASICs ☆56 · Updated 7 years ago
- Reducing the size of convolutional neural networks ☆112 · Updated 7 years ago
- ☆28 · Updated 4 years ago
- ☆15 · Updated 5 years ago
- ☆35 · Updated 5 years ago
- Implementation of NeurIPS 2019 paper "Normalization Helps Training of Quantized LSTM" ☆31 · Updated 11 months ago
- Sparse Recurrent Neural Networks: Pruning Connections and Hidden Sizes (TensorFlow) ☆74 · Updated 4 years ago
- Code for the IJCAI 2019 paper "Cooperative Pruning in Cross-Domain Deep Neural Network Compression" ☆12 · Updated 5 years ago
- PyTorch implementation of FAT: learning low-bitwidth parametric representation via frequency-aware transformation ☆27 · Updated 4 years ago
- A Unified, Systematic Framework of Structured Weight Pruning for DNNs ☆22 · Updated 6 years ago
- A collection of training tricks for binarized neural networks ☆72 · Updated 4 years ago
- 3rd-place solution for the NeurIPS 2019 MicroNet challenge ☆35 · Updated 5 years ago
- Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression ☆49 · Updated 2 years ago
- PyTorch implementation of Scalpel: node pruning for five benchmark networks and SIMD-aware weight pruning for LeNet-300-100… ☆41 · Updated 6 years ago
- Example of applying Gaussian and Laplace clipping to CNN activations ☆34 · Updated 6 years ago
- ☆46 · Updated 5 years ago
- Implementation of BinaryConnect in PyTorch ☆39 · Updated 4 years ago
- Global Sparse Momentum SGD for pruning very deep neural networks ☆44 · Updated 2 years ago
- Quantize weights and activations in recurrent neural networks ☆94 · Updated 6 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training" ☆62 · Updated 6 years ago
- Train neural networks with joint quantization and pruning of both weights and activations, using any PyTorch modules ☆42 · Updated 2 years ago