leo-yangli / l0-arm
Code for L0-ARM: Network Sparsification via Stochastic Binary Optimization
☆15 · Updated 6 years ago
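L0-ARM trains networks with stochastic binary gates on weight groups and optimizes the gate logits with the ARM (Augment-Reinforce-Merge) gradient estimator. As a rough orientation only, the sketch below shows a single-sample ARM gradient estimate for a vector of Bernoulli gates plus a toy gate-learning loop; it is not code from this repository, and the names (`arm_gradient`, `loss`), the λ = 0.1 penalty, and the target pattern are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def arm_gradient(f, phi, rng=np.random):
    """Single-sample ARM estimate of d/dphi E_{z ~ Bernoulli(sigmoid(phi))}[f(z)]."""
    u = rng.uniform(size=phi.shape)              # one shared uniform draw
    z_anti = (u > sigmoid(-phi)).astype(float)   # antithetic gate sample
    z_plain = (u < sigmoid(phi)).astype(float)   # ordinary gate sample
    return (f(z_anti) - f(z_plain)) * (u - 0.5)  # elementwise gradient estimate

# Toy usage (hypothetical numbers): learn gates that match a target on/off
# pattern while paying a small penalty for every gate left open.
phi = np.zeros(4)                        # gate logits, start at p = 0.5
target = np.array([1.0, 0.0, 1.0, 0.0])
lam = 0.1                                # sparsity weight, chosen arbitrarily
loss = lambda z: np.sum((z - target) ** 2) + lam * np.sum(z)

for _ in range(2000):
    phi -= 0.5 * arm_gradient(loss, phi)  # plain SGD on the gate logits

print(np.round(sigmoid(phi), 2))          # gate probabilities after training
```

In the paper's setting, `f` would correspond to the minibatch loss of the gated network, with an expected-L0 penalty on the gate probabilities encouraging gates to close.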
Alternatives and similar repositories for l0-arm
Users interested in l0-arm are comparing it to the repositories listed below.
- Compressing Neural Networks using the Variational Information Bottleneck ☆66 · Updated 3 years ago
- Single shot neural network pruning before training the model, based on connection sensitivity ☆11 · Updated 6 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆105 · Updated 5 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆53 · Updated 4 years ago
- This repository contains code to replicate the experiments given in NeurIPS 2019 paper "One ticket to win them all: generalizing lottery … ☆50 · Updated last year
- ICML 2020, Estimating Generalization under Distribution Shifts via Domain-Invariant Representations ☆23 · Updated 5 years ago
- Bibtex for Sparsity in Deep Learning paper (https://arxiv.org/abs/2102.00554) - open for pull requests ☆46 · Updated 3 years ago
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆30 · Updated 5 years ago
- Code for "Supermasks in Superposition" ☆124 · Updated 2 years ago
- Code for the paper "Understanding Generalization through Visualizations" ☆65 · Updated 4 years ago
- This repository contains the code for our recent paper "Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters" ☆22 · Updated 7 years ago
- Towards increasing stability of neural networks for continual learning: https://arxiv.org/abs/2006.06958.pdf (NeurIPS'20) ☆76 · Updated 2 years ago
- Mode Connectivity and Fast Geometric Ensembles in PyTorch ☆281 · Updated 3 years ago
- Codebase for the paper "A Gradient Flow Framework for Analyzing Network Pruning" ☆20 · Updated 4 years ago
- Code for Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot ☆42 · Updated 5 years ago
- Official code for ICLR 2020 paper "A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning." ☆101 · Updated 5 years ago
- ☆83 · Updated 5 years ago
- ☆93 · Updated 3 years ago
- Lookahead: A Far-sighted Alternative of Magnitude-based Pruning (ICLR 2020) ☆32 · Updated 5 years ago
- ☆55 · Updated 5 years ago
- ☆59 · Updated 2 years ago
- ☆46 · Updated 6 years ago
- Low-variance, efficient and unbiased gradient estimation for optimizing models with binary latent variables. (ICLR 2019) ☆27 · Updated 6 years ago
- [ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference" ☆24 · Updated 3 years ago
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) ☆90 · Updated 5 years ago
- A Closer Look at Accuracy vs. Robustness ☆88 · Updated 4 years ago
- Winning Solution of the NeurIPS 2020 Competition on Predicting Generalization in Deep Learning ☆41 · Updated 4 years ago
- [JMLR] TRADES + random smoothing for certifiable robustness ☆14 · Updated 5 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- Code accompanying our paper "Finding trainable sparse networks through Neural Tangent Transfer" to be published at ICML-2020. ☆13 · Updated 5 years ago