IST-DASLab / M-FAC
Efficient reference implementations of the static & dynamic M-FAC algorithms (for pruning and optimization)
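The static M-FAC algorithm estimates inverse-Hessian-vector products from a window of gradients, using the damped empirical Fisher `F = λI + (1/m) Σ gᵢgᵢᵀ` as the curvature proxy (the same estimator underlying WoodFisher). Below is a minimal sketch of the underlying math via repeated Sherman-Morrison updates; the function name is hypothetical and this naive version keeps the full d×d inverse in memory, whereas the actual M-FAC implementations avoid that with a matrix-free O(d·m) scheme:

```python
import numpy as np

def empirical_fisher_inverse_vp(grads, v, damp=1e-3):
    """Compute (damp*I + (1/m) * sum_i g_i g_i^T)^{-1} v by folding in one
    rank-one gradient term per Sherman-Morrison step.

    Naive O(d^2)-memory sketch for illustration only; not the library's
    actual matrix-free algorithm."""
    m = len(grads)
    d = grads[0].shape[0]
    inv = np.eye(d) / damp              # inverse of the damping term alone
    for g in grads:
        ig = inv @ g
        # Sherman-Morrison update for adding (1/m) * g g^T to the matrix
        inv -= np.outer(ig, ig) / (m + g @ ig)
    return inv @ v
```

Because Sherman-Morrison is exact for rank-one updates, the result matches a direct solve against the explicitly assembled Fisher matrix, which makes the sketch easy to sanity-check on small dimensions.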
Related projects:
- Code accompanying the NeurIPS 2020 paper "WoodFisher" (Singh & Alistarh, 2020)
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" (https://openreview.net/pdf?id=SkgsACVKPH)
- Block-sparse movement pruning
- Soft Threshold Weight Reparameterization for Learnable Sparsity
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021)
- Parameter-Efficient Transfer Learning with Diff Pruning
- "SNIP: Single-Shot Network Pruning Based on Connection Sensitivity"
- LARS large-batch optimizer for ImageNet with PyTorch and ResNet (77% accuracy), using Horovod for distributed training. Optional acc…
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training"
- Implementation of (overlap) local SGD in PyTorch
- Practical low-rank gradient compression for distributed optimization (https://arxiv.org/abs/1905.13727)
- Distributed K-FAC preconditioner for PyTorch
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De…
- PyTorch library for factorized L0-based pruning
- Code for "Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot"
- PyTorch implementation of the paper "SNIP: Single-Shot Network Pruning Based on Connection Sensitivity" by Lee et al.
- PyTorch package implementing PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022)
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization
- Block-sparse primitives for PyTorch
- Reproduction and analysis of the SNIP paper
- Implementation of Continuous Sparsification, a method for pruning and ticket search in deep networks
- Compressing Neural Networks using the Variational Information Bottleneck
- [ICML 2022] "Training Your Sparse Neural Network Better with Any Mask" by Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang