jfainberg / hashed_nets
PyTorch implementation of HashedNets
☆36 · Updated last year
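HashedNets (Chen et al., 2015, "Compressing Neural Networks with the Hashing Trick") tie every entry of a layer's virtual weight matrix to one of a small pool of shared parameters through a fixed hash function, plus a random sign. The sketch below illustrates only that core idea; it is not taken from this repository, and names such as `HashedLinear`, `num_buckets`, and the seeded sign hash are illustrative assumptions.

```python
# Minimal sketch of HashedNets-style weight sharing (not this repo's code):
# each (i, j) position of a virtual weight matrix is hashed to one of
# num_buckets trainable parameters and multiplied by a fixed random sign.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HashedLinear(nn.Module):
    def __init__(self, in_features, out_features, num_buckets, seed=0):
        super().__init__()
        # The only trainable weights: num_buckets shared values.
        self.weight = nn.Parameter(torch.randn(num_buckets) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Fixed "hash": a seeded random bucket index and sign per position.
        g = torch.Generator().manual_seed(seed)
        idx = torch.randint(num_buckets, (out_features, in_features), generator=g)
        sign = torch.randint(0, 2, (out_features, in_features), generator=g) * 2 - 1
        self.register_buffer("bucket_idx", idx)
        self.register_buffer("sign", sign.float())

    def forward(self, x):
        # Reconstruct the full virtual weight matrix from the shared buckets.
        virtual_weight = self.weight[self.bucket_idx] * self.sign
        return F.linear(x, virtual_weight, self.bias)


# Usage: a 512 -> 256 layer whose 131072 virtual weights are backed by
# only 4096 real parameters.
layer = HashedLinear(512, 256, num_buckets=4096)
out = layer(torch.randn(8, 512))
```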
Alternatives and similar repositories for hashed_nets:
Users interested in hashed_nets are comparing it to the repositories listed below.
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆48 · Updated 3 years ago
- Code base for SRSGD. ☆28 · Updated 4 years ago
- Code for paper "SWALP: Stochastic Weight Averaging forLow-Precision Training".☆62Updated 5 years ago
- Delta Orthogonal Initialization for PyTorch ☆18 · Updated 6 years ago
- A PyTorch implementation for the LSTM experiments in the paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification f… ☆43 · Updated 4 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆88 · Updated last year
- ☆22 · Updated 6 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆102 · Updated 4 years ago
- Identify a binary weight or binary weight and activation subnetwork within a randomly initialized network by only pruning and binarizing … ☆49 · Updated 2 years ago
- Code for Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot ☆42 · Updated 4 years ago
- [ICLR 2021 Spotlight] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yinin… ☆30 · Updated 10 months ago
- [JMLR'20] NeurIPS 2019 MicroNet Challenge Efficient Language Modeling, Champion ☆40 · Updated 3 years ago
- ☆34 · Updated 5 years ago
- This repository is no longer maintained. Check … ☆81 · Updated 4 years ago
- Code for BlockSwap (ICLR 2020). ☆33 · Updated 3 years ago
- Automatic learning-rate scheduler ☆44 · Updated 3 years ago
- ☆14 · Updated 3 years ago
- ☆70 · Updated 4 years ago
- Efficient Riemannian Optimization on Stiefel Manifold via Cayley Transform ☆37 · Updated 5 years ago
- DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures ☆31 · Updated 4 years ago
- Reparameterize your PyTorch modules ☆70 · Updated 4 years ago
- ☆14 · Updated 4 years ago
- Official repo for Firefly Neural Architecture Descent: A General Approach for Growing Neural Networks. Accepted at NeurIPS 2020. ☆31 · Updated 4 years ago
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule" ☆37 · Updated 3 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆16 · Updated 4 years ago
- Official PyTorch implementation of "Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets" (ICLR 2021)