lgalke / torch-pruning
Pruning methods for PyTorch with an optimizer-like interface
☆15 · Updated 5 years ago
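To make the description above concrete, here is a minimal, hypothetical sketch of what an "optimizer-like" pruning interface can look like. This is not the actual torch-pruning API; the `MagnitudePruner` class, its `step()` method, and the plain-list weights are invented for illustration only. The idea: the pruner wraps a set of weight tensors, and each call to `step()` zeroes out the lowest-magnitude fraction of the remaining weights, mirroring how a PyTorch optimizer's `step()` updates the parameters it wraps.

```python
# Hypothetical sketch (NOT the torch-pruning API): an optimizer-like pruner.
# Weights are plain Python lists standing in for parameter tensors.

class MagnitudePruner:
    def __init__(self, tensors, rate=0.2):
        self.tensors = tensors  # list of flat weight lists
        self.rate = rate        # fraction of surviving weights pruned per step

    def step(self):
        """Zero out the lowest-magnitude `rate` fraction of nonzero weights."""
        for t in self.tensors:
            alive = sorted(abs(w) for w in t if w != 0.0)
            k = int(len(alive) * self.rate)
            if k == 0:
                continue
            threshold = alive[k - 1]  # k-th smallest surviving magnitude
            for i, w in enumerate(t):
                if w != 0.0 and abs(w) <= threshold:
                    t[i] = 0.0

weights = [[0.9, -0.1, 0.5, 0.05, -0.7]]
pruner = MagnitudePruner(weights, rate=0.4)
pruner.step()        # zeroes the two smallest-magnitude entries
print(weights[0])    # -> [0.9, 0.0, 0.5, 0.0, -0.7]
```

As with an optimizer, the pruner is called once per training iteration (or on a schedule), which is what makes this interface convenient for iterative pruning loops.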
Alternatives and similar repositories for torch-pruning
Users interested in torch-pruning are comparing it to the libraries listed below.
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆74 · Updated 5 years ago
- All about acceleration and compression of Deep Neural Networks ☆33 · Updated 6 years ago
- Identify a binary weight or binary weight and activation subnetwork within a randomly initialized network by only pruning and binarizing … ☆51 · Updated 3 years ago
- ☆52 · Updated 6 years ago
- Code to implement the experiments in "Post-training Quantization for Neural Networks with Provable Guarantees" by Jinjie Zhang, Yixuan Zh… ☆11 · Updated 2 years ago
- DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures ☆32 · Updated 5 years ago
- [ICLR 2021 Spotlight] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yinin… ☆31 · Updated last year
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 6 years ago
- This repository provides the source code used in the paper "A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off" ☆13 · Updated 6 years ago
- ☆35 · Updated 5 years ago
- Official implementation of the NeurIPS 2020 paper "Sparse Weight Activation Training". ☆29 · Updated 4 years ago
- Repository containing pruned models and related information ☆37 · Updated 4 years ago
- Proximal Mean-field for Neural Network Quantization ☆21 · Updated 5 years ago
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule" ☆39 · Updated 3 years ago
- Train neural networks with joint quantization and pruning on both weights and activations using any PyTorch modules ☆43 · Updated 3 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- Code for "Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?" ☆31 · Updated 6 years ago
- Code for the accepted NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" ☆54 · Updated 5 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" ☆167 · Updated 5 years ago
- Official PyTorch implementation of "Learning Architectures for Binary Networks" (ECCV 2020) ☆26 · Updated 5 years ago
- ☆69 · Updated 5 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆90 · Updated 2 years ago
- ProxQuant: Quantized Neural Networks via Proximal Operators ☆30 · Updated 6 years ago
- Code for the ICLR 2020 paper "Training Binary Neural Networks with Real-to-Binary Convolutions" ☆34 · Updated 5 years ago
- ☆43 · Updated last year
- A highly modular PyTorch framework with a focus on Neural Architecture Search (NAS). ☆23 · Updated 3 years ago
- Code for "High-Capacity Expert Binary Networks" (ICLR 2021) ☆27 · Updated 3 years ago
- PyTorch implementation using binary weights and activations. Accuracies are comparable. ☆45 · Updated 5 years ago
- Programmable Neural Network Compression ☆149 · Updated 3 years ago
- Reference implementations of popular Binarized Neural Networks ☆109 · Updated 3 weeks ago