leo-yangli / l0-arm
Code for L0-ARM: Network Sparsification via Stochastic Binary Optimization
☆15 · Updated 5 years ago
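L0-ARM sparsifies a network by attaching stochastic binary gates to weights or groups of weights and optimizing the gates' keep probabilities with the ARM (Augment-REINFORCE-Merge) gradient estimator. For orientation, below is a minimal NumPy sketch of the univariate ARM estimator (Yin & Zhou, 2019) that the method builds on; it is not the repository's API, and all function names are illustrative.

```python
# Minimal sketch of the ARM (Augment-REINFORCE-Merge) estimator for one
# Bernoulli gate z ~ Bernoulli(sigmoid(phi)).  Illustrative only; not the
# repository's code or API.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def arm_gradient(f, phi, num_samples=1000, rng=None):
    """Unbiased estimate of d/d phi of E_{z ~ Bernoulli(sigmoid(phi))}[f(z)].

    f   : callable mapping a binary gate value (0.0 or 1.0) to a scalar loss
    phi : logit of the gate's keep probability
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=num_samples)                                 # shared uniform noise
    f_plus = np.array([f(float(ui > sigmoid(-phi))) for ui in u])     # antithetic arm
    f_minus = np.array([f(float(ui < sigmoid(phi))) for ui in u])     # main arm
    return np.mean((f_plus - f_minus) * (u - 0.5))

# Toy check: f(z) = (z - 0.3)^2, so E[f] = 0.49*p + 0.09*(1 - p) with p = sigmoid(phi),
# and the exact gradient is 0.4 * p * (1 - p).
phi = 0.5
p = sigmoid(phi)
exact = 0.4 * p * (1 - p)
est = arm_gradient(lambda z: (z - 0.3) ** 2, phi, num_samples=200000)
print(f"exact grad ~ {exact:.4f}, ARM estimate ~ {est:.4f}")
```

In the L0-ARM setting, f would roughly correspond to the training loss plus an L0 penalty evaluated with the sampled gates, with one logit phi per weight, neuron, or filter.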
Alternatives and similar repositories for l0-arm:
Users interested in l0-arm are comparing it to the repositories listed below.
- Compressing Neural Networks using the Variational Information Bottleneck ☆66 · Updated 2 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆103 · Updated 5 years ago
- Codebase for the paper "A Gradient Flow Framework for Analyzing Network Pruning" ☆21 · Updated 4 years ago
- ☆89 · Updated 3 years ago
- ☆45 · Updated 5 years ago
- This repository contains code to replicate the experiments given in NeurIPS 2019 paper "One ticket to win them all: generalizing lottery … ☆51 · Updated 9 months ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆49 · Updated 4 years ago
- Low-variance, efficient and unbiased gradient estimation for optimizing models with binary latent variables. (ICLR 2019) ☆28 · Updated 6 years ago
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule" ☆38 · Updated 3 years ago
- Code for Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot ☆42 · Updated 4 years ago
- ☆58 · Updated 2 years ago
- Implementation of Effective Sparsification of Neural Networks with Global Sparsity Constraint ☆31 · Updated 3 years ago
- Towards increasing stability of neural networks for continual learning: https://arxiv.org/abs/2006.06958.pdf (NeurIPS'20) ☆75 · Updated 2 years ago
- ☆53 · Updated 6 years ago
- Learning To Stop While Learning To Predict ☆34 · Updated 2 years ago
- ☆40 · Updated 5 years ago
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆31 · Updated 4 years ago
- [ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference" ☆24 · Updated 3 years ago
- ☆30 · Updated 4 years ago
- SmoothOut: Smoothing Out Sharp Minima to Improve Generalization in Deep Learning ☆23 · Updated 6 years ago
- [ICML 2021 Oral] "CATE: Computation-aware Neural Architecture Encoding with Transformers" by Shen Yan, Kaiqiang Song, Fei Liu, Mi Zhang ☆19 · Updated 3 years ago
- SNIP: SINGLE-SHOT NETWORK PRUNING ☆30 · Updated last month
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- A Closer Look at Accuracy vs. Robustness ☆88 · Updated 3 years ago
- [ICLR 2020] Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers. ☆31 · Updated 5 years ago
- The Full Spectrum of Deepnet Hessians at Scale: Dynamics with SGD Training and Sample Size ☆17 · Updated 5 years ago
- ICML 2020, Estimating Generalization under Distribution Shifts via Domain-Invariant Representations ☆23 · Updated 4 years ago
- Code for testing DCT plus Sparse (DCTpS) networks ☆14 · Updated 3 years ago
- Computing various measures and generalization bounds on convolutional and fully connected networks ☆35 · Updated 6 years ago
- Official code for "In Search of Robust Measures of Generalization" (NeurIPS 2020) ☆28 · Updated 4 years ago