stevenygd / SWALP
Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training".
☆62 · Updated 6 years ago
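The core idea behind SWALP is to run training itself in low precision while maintaining a separate, higher-precision running average of the weights (stochastic weight averaging). Below is a minimal sketch of that idea in PyTorch; it is not the repository's actual code, and the quantizer, bit width, averaging interval, and toy model are all illustrative assumptions.

```python
# Minimal sketch of the SWALP idea (not the repository's actual implementation):
# keep the trained weights in low precision via stochastic rounding, while
# accumulating a full-precision running average (SWA) on the side.
import torch
import torch.nn as nn

def stochastic_round_quantize(x, num_bits=8):
    """Quantize a tensor to a fixed-point grid using stochastic rounding (assumed scheme)."""
    scale = x.abs().max().clamp(min=1e-8) / (2 ** (num_bits - 1) - 1)
    scaled = x / scale
    floor = scaled.floor()
    # Round up with probability equal to the fractional part.
    rounded = floor + (torch.rand_like(scaled) < (scaled - floor)).float()
    return rounded.clamp(-(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1) * scale

model = nn.Linear(10, 2)                                   # toy model (assumption)
swa_state = {k: v.clone() for k, v in model.state_dict().items()}
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
n_averaged = 0

for step in range(100):
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))  # synthetic batch
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Keep the stored weights low-precision after every update.
    with torch.no_grad():
        for p in model.parameters():
            p.copy_(stochastic_round_quantize(p, num_bits=8))
    # Periodically fold the low-precision weights into a full-precision average.
    if step % 10 == 0:
        n_averaged += 1
        with torch.no_grad():
            for k, v in model.state_dict().items():
                swa_state[k] += (v - swa_state[k]) / n_averaged
```

At evaluation time, the averaged weights in `swa_state` would be loaded into the model; the paper's observation is that this average recovers much of the accuracy lost to low-precision training.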
Alternatives and similar repositories for SWALP
Users that are interested in SWALP are comparing it to the libraries listed below
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH☆104Updated 5 years ago
- ☆70Updated 5 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020)☆52Updated 4 years ago
- PyTorch implementation of HashedNets☆36Updated 2 years ago
- This is a PyTorch implementation of the Scalpel. Node pruning for five benchmark networks and SIMD-aware weight pruning for LeNet-300-100…☆41Updated 6 years ago
- DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures☆33Updated 4 years ago
- PyProf2: PyTorch Profiling tool☆82Updated 4 years ago
- Code for BlockSwap (ICLR 2020).☆33Updated 4 years ago
- This repository is no longer maintained. Check☆81Updated 5 years ago
- Code for the NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" ☆54 · Updated 5 years ago
- ☆83 · Updated 5 years ago
- Implementation of the ICLR 2017 paper "Loss-aware Binarization of Deep Networks" ☆18 · Updated 6 years ago
- [ICLR 2021 Spotlight] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yinin… ☆31 · Updated last year
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule☆39Updated 3 years ago
- ☆74 · Updated 5 years ago
- Code release to reproduce ASHA experiments from "Random Search and Reproducibility for NAS." ☆22 · Updated 5 years ago
- Identify a binary weight or binary weight and activation subnetwork within a randomly initialized network by only pruning and binarizing… ☆52 · Updated 3 years ago
- Implementation of the ICLR 2018 paper "Loss-aware Weight Quantization of Deep Networks" ☆26 · Updated 5 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆90 · Updated 2 years ago
- All about acceleration and compression of Deep Neural Networks ☆33 · Updated 5 years ago
- ☆53 · Updated 6 years ago
- Proximal Mean-field for Neural Network Quantization ☆22 · Updated 5 years ago
- A re-implementation of Fixed-update Initialization ☆153 · Updated 5 years ago
- ☆23 · Updated 6 years ago
- SelectiveBackprop accelerates training by dynamically prioritizing useful examples with high loss ☆32 · Updated 5 years ago
- Implementation of the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆74 · Updated 5 years ago
- ProxQuant: Quantized Neural Networks via Proximal Operators ☆29 · Updated 6 years ago
- Code for "Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot" ☆42 · Updated 4 years ago
- Training wide residual networks for deployment using a single bit for each weight - Official code repository for ICLR 2018 published pape… ☆36 · Updated 5 years ago
- Code base for SRSGD. ☆28 · Updated 5 years ago