BibTeX for the Sparsity in Deep Learning paper (https://arxiv.org/abs/2102.00554) - open for pull requests
☆46 · May 4, 2022 · Updated 3 years ago
Alternatives and similar repositories for sparsity-in-deep-learning
Users interested in sparsity-in-deep-learning are comparing it to the libraries listed below.
- [ICML 2022] "Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets" by Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wa… ☆33 · Apr 9, 2023 · Updated 2 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Nov 11, 2023 · Updated 2 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆91 · Feb 15, 2023 · Updated 3 years ago
- Reproducing RigL (ICML 2020) as part of the ML Reproducibility Challenge 2020 ☆29 · Jan 6, 2022 · Updated 4 years ago
- PyTorch implementation of LARS (Layer-wise Adaptive Rate Scaling) ☆19 · May 11, 2019 · Updated 6 years ago
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Apr 28, 2023 · Updated 2 years ago
- Official PyTorch code for "APP: Anytime Progressive Pruning" (DyNN @ ICML 2022; CLL @ ACML 2022; SNN @ ICML 2022; and SlowDNN 2023) ☆16 · Nov 22, 2022 · Updated 3 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge" by Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆17 · Mar 16, 2022 · Updated 4 years ago
- Code for the ICML 2022 paper "SPDY: Accurate Pruning with Speedup Guarantees" ☆20 · May 3, 2023 · Updated 2 years ago
- ☆38 · Nov 13, 2020 · Updated 5 years ago
- A branch predictor simulator in C++ that tests 6 different types of branch predictors ☆13 · Apr 26, 2018 · Updated 7 years ago
- Code for an ICML 2021 submission ☆35 · Mar 24, 2021 · Updated 5 years ago
- Ensemble Knowledge Guided Sub-network Search and Fine-tuning for Filter Pruning