End-to-end training of sparse deep neural networks with little-to-no performance loss.
☆335 · Jan 26, 2023 · Updated 3 years ago
Alternatives and similar repositories for rigl
Users interested in rigl are comparing it to the repositories listed below.
- Lightweight PyTorch implementation of rigl, a sparse-to-sparse optimizer. ☆60 · Nov 17, 2021 · Updated 4 years ago
- Sparse learning library and sparse momentum resources. ☆385 · Jun 6, 2022 · Updated 3 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Nov 11, 2023 · Updated 2 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity. ☆91 · Feb 15, 2023 · Updated 3 years ago
- Reproducing RigL (ICML 2020) as part of the ML Reproducibility Challenge 2020. ☆29 · Jan 6, 2022 · Updated 4 years ago
- A repository in preparation for open-sourcing lottery ticket hypothesis code. ☆636 · Sep 6, 2022 · Updated 3 years ago
- SNIP: Single-shot Network Pruning based on Connection Sensitivity. ☆115 · Jun 26, 2019 · Updated 6 years ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration. ☆31 · Feb 11, 2023 · Updated 3 years ago
- ☆153 · May 25, 2020 · Updated 5 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆17 · Mar 16, 2022 · Updated 4 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆17 · Jan 5, 2021 · Updated 5 years ago
- Fast Block Sparse Matrices for PyTorch. ☆549 · Jan 21, 2021 · Updated 5 years ago
- Code for "Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot". ☆42 · Nov 8, 2020 · Updated 5 years ago
- Official repository for the "Big Transfer (BiT): General Visual Representation Learning" paper. ☆1,538 · Jul 30, 2024 · Updated last year
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" (https://openreview.net/pdf?id=SkgsACVKPH). ☆105 · Feb 18, 2020 · Updated 6 years ago
- PyTorch implementation of the paper "SNIP: Single-shot Network Pruning based on Connection Sensitivity" by Lee et al. ☆109 · Apr 23, 2019 · Updated 6 years ago
- [ICML 2022] "Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets" by Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wa… ☆33 · Apr 9, 2023 · Updated 2 years ago
- [ICML 2022] "Training Your Sparse Neural Network Better with Any Mask" by Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang. ☆30 · Jul 24, 2022 · Updated 3 years ago
- [ECML-PKDD 2020] "Topological Insights into Sparse Neural Networks". ☆13 · May 2, 2022 · Updated 3 years ago
- [NeurIPS 2021] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Dec 1, 2023 · Updated 2 years ago
- [IJCAI 2022] "Dynamic Sparse Training for Deep Reinforcement Learning" by Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola P… ☆15 · May 13, 2022 · Updated 3 years ago
- Fast and Easy Infinite Neural Networks in Python. ☆2,377 · Mar 1, 2024 · Updated 2 years ago
- BibTeX for the "Sparsity in Deep Learning" paper (https://arxiv.org/abs/2102.00554); open for pull requests. ☆46 · May 4, 2022 · Updated 3 years ago
- Code for "Fast Sparse ConvNets" CVPR 2020 submissions. ☆12 · Nov 20, 2019 · Updated 6 years ago
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586). ☆138 · Mar 5, 2026 · Updated 2 weeks ago
- FasterAI: a repository for making smaller and faster models with the fastai library. ☆36 · Mar 4, 2024 · Updated 2 years ago
- Swish Activation: PyTorch CUDA implementation. ☆37 · Oct 10, 2019 · Updated 6 years ago
- ☆21 · Mar 15, 2023 · Updated 3 years ago
- ☆623 · Updated this week
- [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression. ☆14 · Sep 6, 2022 · Updated 3 years ago
- ☆223 · Feb 21, 2023 · Updated 3 years ago
- Identify a binary weight or binary weight and activation subnetwork within a randomly initialized network by only pruning and binarizing … ☆51 · Feb 24, 2022 · Updated 4 years ago
- Rethinking the Value of Network Pruning (PyTorch) (ICLR 2019). ☆1,516 · Jun 7, 2020 · Updated 5 years ago
- ☆13 · Mar 8, 2020 · Updated 6 years ago
- Artifacts of the VLDB 2022 paper "COMET: A Novel Memory-Efficient Deep Learning Training Framework by Using Error-Bounded Lossy Compression". ☆10 · Aug 2, 2022 · Updated 3 years ago
- Code for an ICML 2021 submission. ☆35 · Mar 24, 2021 · Updated 4 years ago
- Pruning Neural Networks with the Taylor criterion in PyTorch. ☆321 · Nov 3, 2019 · Updated 6 years ago
- A reimplementation of "The Lottery Ticket Hypothesis" (Frankle and Carbin) on MNIST. ☆725 · Jul 27, 2020 · Updated 5 years ago
- "Learning Rate Dropout" in PyTorch. ☆34 · Dec 6, 2019 · Updated 6 years ago