GATECH-EIC / Early-Bird-Tickets
[ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks
☆138 · Updated 4 years ago
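For orientation, below is a minimal sketch of the idea behind the title, not code from this repository: derive a channel-pruning mask from BatchNorm scaling factors each epoch and "draw" the ticket early once consecutive masks stop changing. The pruning ratio, distance threshold, and function names here are illustrative assumptions, and the sketch assumes a PyTorch model containing `nn.BatchNorm2d` layers.

```python
import torch
import torch.nn as nn


def bn_pruning_mask(model: nn.Module, prune_ratio: float = 0.5) -> torch.Tensor:
    """Binary mask over all BatchNorm channels: 1 = keep, 0 = prune (smallest |gamma|)."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    k = max(1, int(prune_ratio * gammas.numel()))
    threshold = torch.kthvalue(gammas, k).values  # k-th smallest |gamma|
    return (gammas > threshold).float()


def mask_distance(mask_a: torch.Tensor, mask_b: torch.Tensor) -> float:
    """Normalized Hamming distance between two pruning masks."""
    return (mask_a != mask_b).float().mean().item()


# Hypothetical use inside a training loop, checked once per epoch:
#   mask = bn_pruning_mask(model, prune_ratio=0.5)
#   if prev_mask is not None and mask_distance(mask, prev_mask) < 0.1:
#       # Masks have stabilized, so the "early-bird ticket" can be drawn:
#       # prune the network now and train only the small subnetwork.
#       ...
#   prev_mask = mask
```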
Alternatives and similar repositories for Early-Bird-Tickets:
Users interested in Early-Bird-Tickets are comparing it to the libraries listed below.
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH☆103Updated 5 years ago
- ☆144Updated 2 years ago
- ☆70Updated 5 years ago
- ☆226Updated 9 months ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity☆89Updated 2 years ago
- Code for paper "SWALP: Stochastic Weight Averaging forLow-Precision Training".☆62Updated 5 years ago
- ☆191Updated 4 years ago
- ☆67Updated 4 years ago
- Pytorch implementation of the paper "SNIP: Single-shot Network Pruning based on Connection Sensitivity" by Lee et al.☆107Updated 6 years ago
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586)☆137Updated 5 years ago
- Code for the paper "Training Binary Neural Networks with Bayesian Learning Rule☆38Updated 3 years ago
- SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY☆113Updated 5 years ago
- [ICLR 2021] "Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective" by Wuyang Chen, Xinyu Gong, …☆168Updated 3 years ago
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization"☆74Updated 5 years ago
- [ICLR 2021 Spotlight] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yinin…☆30Updated last year
- SNIP: SINGLE-SHOT NETWORK PRUNING☆30Updated last month
- End-to-end training of sparse deep neural networks with little-to-no performance loss.☆321Updated 2 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020)☆50Updated 4 years ago
- Neural Architecture Transfer (Arxiv'20), PyTorch Implementation☆156Updated 4 years ago
- code for "AttentiveNAS Improving Neural Architecture Search via Attentive Sampling"☆104Updated 3 years ago
- "Layer-wise Adaptive Rate Scaling" in PyTorch☆86Updated 4 years ago
- A research library for pytorch-based neural network pruning, compression, and more.☆161Updated 2 years ago
- ☆53Updated 6 years ago
- Code release for paper "Random Search and Reproducibility for NAS"☆167Updated 5 years ago
- Code release for "Adversarial Robustness vs Model Compression, or Both?"☆91Updated 3 years ago
- ☆13Updated 3 years ago
- Identify a binary weight or binary weight and activation subnetwork within a randomly initialized network by only pruning and binarizing …☆52Updated 3 years ago
- Code for Neural Architecture Search without Training (ICML 2021)☆465Updated 3 years ago
- Zero-Cost Proxies for Lightweight NAS☆151Updated 2 years ago
- Using ideas from product quantization for state-of-the-art neural network compression.☆146Updated 3 years ago