MingSun-Tse / Awesome-Pruning-at-Initialization
[IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization.
☆59 · Updated last year
Alternatives and similar repositories for Awesome-Pruning-at-Initialization
Users interested in Awesome-Pruning-at-Initialization are comparing it to the libraries listed below.
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… ☆30 · Updated 3 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆91 · Updated 2 years ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆31 · Updated 2 years ago
- Reproducing RigL (ICML 2020) as part of the ML Reproducibility Challenge 2020 ☆28 · Updated 3 years ago
- [ICML 2022] Training Your Sparse Neural Network Better with Any Mask. Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang ☆28 · Updated 2 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆33 · Updated 2 years ago
- In progress. ☆64 · Updated last year
- ☆43 · Updated last year
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated last year
- ☆30 · Updated 3 years ago
- [ICLR'21] Neural Pruning via Growing Regularization (PyTorch) ☆83 · Updated 3 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- Code for Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot ☆42 · Updated 4 years ago
- Prospect Pruning: Finding Trainable Weights at Initialization Using Meta-Gradients ☆31 · Updated 3 years ago
- [ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen… ☆28 · Updated last year
- Identify a binary-weight or binary-weight-and-activation subnetwork within a randomly initialized network by only pruning and binarizing… ☆52 · Updated 3 years ago
- Comparison of "pruning at initialization prior to training" methods (SynFlow/SNIP/GraSP) in PyTorch; see the SNIP-style sketch after this list. ☆16 · Updated last year
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆52 · Updated 4 years ago
- [ICML 2021] "Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inferen… ☆14 · Updated 3 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- ☆14 · Updated 4 years ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆40 · Updated 2 years ago
- [ICML 2022] "Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets" by Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wa… ☆33 · Updated 2 years ago
- Lightweight PyTorch implementation of RigL, a sparse-to-sparse optimizer. ☆57 · Updated 3 years ago
- This is the official PyTorch implementation for "Sharpness-aware Quantization for Deep Neural Networks". ☆43 · Updated 3 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆105 · Updated 5 years ago
- Implementation of Continuous Sparsification, a method for pruning and ticket search in deep networks ☆33 · Updated 3 years ago
- ☆25 · Updated 3 years ago
- [ICLR 2021] "Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, S… ☆25 · Updated 3 years ago