musco-ai / musco-pytorch
MUSCO: MUlti-Stage COmpression of neural networks
☆72Updated 4 years ago
Alternatives and similar repositories for musco-pytorch
Users interested in musco-pytorch are comparing it to the libraries listed below
- FLOPs and other statistics COunter for Pytorch neural networks☆23Updated 4 years ago
- Repository to track the progress in model compression and acceleration☆105Updated 3 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training".☆62Updated 6 years ago
- A Pytorch implementation of Neural Network Compression (pruning, deep compression, channel pruning)☆154Updated 10 months ago
- Using ideas from product quantization for state-of-the-art neural network compression.☆145Updated 3 years ago
- DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures☆33Updated 4 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity☆90Updated 2 years ago
- PyTorch implementation of Wide Residual Networks with 1-bit weights by McDonnell (ICLR 2018)☆126Updated 6 years ago
- Structured Bayesian Pruning, NIPS 2017☆74Updated 4 years ago
- Repository containing pruned models and related information☆37Updated 4 years ago
- Class Project for 18663 - Implementation of FBNet (Hardware-Aware DNAS)☆34Updated 5 years ago
- SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY☆114Updated 5 years ago
- Prune DNN using Alternating Direction Method of Multipliers (ADMM)☆108Updated 4 years ago
- AdaShift optimizer implementation in PyTorch☆17Updated 6 years ago
- ProxQuant: Quantized Neural Networks via Proximal Operators☆29Updated 6 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH☆104Updated 5 years ago
- A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights"☆167Updated 5 years ago
- Pytorch version for weight pruning for Murata Group's CREST project☆57Updated 7 years ago
- This is the code for the FAT method, with links to quantized tflite models. (CC BY-NC-ND)☆19Updated 6 years ago
- ☆35Updated 5 years ago
- Code for the accepted NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization"☆54Updated 5 years ago
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization"☆74Updated 5 years ago
- Codes for Layer-wise Optimal Brain Surgeon☆77Updated 6 years ago
- Model compression by constrained optimization, using the Learning-Compression (LC) algorithm☆73Updated 3 years ago
- Proximal Mean-field for Neural Network Quantization☆22Updated 5 years ago
- Identify a binary weight or binary weight and activation subnetwork within a randomly initialized network by only pruning and binarizing …☆52Updated 3 years ago
- All about acceleration and compression of Deep Neural Networks☆33Updated 5 years ago
- ☆83Updated 5 years ago
- Training wide residual networks for deployment using a single bit for each weight - Official Code Repository for ICLR 2018 Published Paper☆36Updated 5 years ago
- Code release for "Adversarial Robustness vs Model Compression, or Both?"☆91Updated 3 years ago