jfainberg / hashed_nets
PyTorch implementation of HashedNets
☆38 · Updated 2 years ago
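HashedNets compress a layer by storing only a small vector of shared parameters and using a fixed random hash to map each position of the virtual weight matrix to one of those shared entries. The sketch below illustrates the idea in PyTorch; it is a minimal illustration of the technique, not the code from this repository, and the class name, bucket count, and initialization are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashedLinear(nn.Module):
    """Illustrative sketch of a HashedNets-style linear layer (hypothetical
    implementation, not taken from jfainberg/hashed_nets): the full weight
    matrix is never stored; each (out, in) position reads a shared parameter
    chosen by a fixed random hash, with a random sign to reduce collision bias."""

    def __init__(self, in_features, out_features, num_buckets, seed=0):
        super().__init__()
        # Only `num_buckets` real parameters instead of in_features * out_features.
        self.weights = nn.Parameter(torch.randn(num_buckets) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_features))
        g = torch.Generator().manual_seed(seed)
        # Fixed (non-learned) hash: bucket index and sign per virtual weight.
        self.register_buffer(
            "idx", torch.randint(num_buckets, (out_features, in_features), generator=g)
        )
        self.register_buffer(
            "sign",
            torch.randint(0, 2, (out_features, in_features), generator=g) * 2.0 - 1.0,
        )

    def forward(self, x):
        # Expand the shared parameters into the virtual weight matrix on the fly.
        w = self.weights[self.idx] * self.sign
        return F.linear(x, w, self.bias)

layer = HashedLinear(64, 32, num_buckets=128)
y = layer(torch.randn(4, 64))
print(y.shape)  # torch.Size([4, 32])
```

Gradients flow to the shared vector through the indexing, so all virtual weights hashed to the same bucket are tied during training.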
Alternatives and similar repositories for hashed_nets
Users interested in hashed_nets are comparing it to the libraries listed below.
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training" ☆62 · Updated 6 years ago
- ☆69 · Updated 5 years ago
- Butterfly matrix multiplication in PyTorch ☆178 · Updated 2 years ago
- A PyTorch implementation of the LSTM experiments in the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification f…" ☆47 · Updated 5 years ago
- A re-implementation of Fixed-update Initialization ☆156 · Updated 6 years ago
- [JMLR'20] NeurIPS 2019 MicroNet Challenge Efficient Language Modeling, Champion ☆41 · Updated 4 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆91 · Updated 2 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆53 · Updated 4 years ago
- Identify a binary weight or binary weight and activation subnetwork within a randomly initialized network by only pruning and binarizing … ☆51 · Updated 3 years ago
- ☆145 · Updated 2 years ago
- Code release to reproduce ASHA experiments from "Random Search and Reproducibility for NAS" ☆22 · Updated 6 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆105 · Updated 5 years ago
- Piecewise Linear Functions (PWL) implementation in PyTorch ☆57 · Updated 3 years ago
- [ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks ☆140 · Updated 5 years ago
- Structured matrices for compressing neural networks ☆67 · Updated 2 years ago
- Code for "Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot" ☆42 · Updated 5 years ago
- DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures ☆32 · Updated 5 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning ☆17 · Updated 5 years ago
- Programmable Neural Network Compression ☆149 · Updated 3 years ago
- Code base for SRSGD ☆28 · Updated 5 years ago
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586) ☆136 · Updated 3 weeks ago
- SelectiveBackprop accelerates training by dynamically prioritizing useful examples with high loss ☆32 · Updated 5 years ago
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆75 · Updated 6 years ago
- A research library for PyTorch-based neural network pruning, compression, and more ☆162 · Updated 3 years ago
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule" ☆40 · Updated 4 years ago
- This repository is no longer maintained. Check ☆81 · Updated 5 years ago
- Delta Orthogonal Initialization for PyTorch ☆18 · Updated 7 years ago
- Reparameterize your PyTorch modules ☆71 · Updated 5 years ago
- All about acceleration and compression of Deep Neural Networks ☆33 · Updated 6 years ago
- Official implementation of the NeurIPS 2020 "Sparse Weight Activation Training" paper ☆29 · Updated 4 years ago