IST-DASLab / M-FAC
Efficient reference implementations of the static & dynamic M-FAC algorithms (for pruning and optimization)
☆17 · Updated 3 years ago
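For context on what the listed repository implements: the dynamic M-FAC algorithm keeps a sliding window of the m most recent gradients and computes inverse-Fisher-vector products without ever materializing the d×d Fisher matrix. The sketch below is not the repository's API; it is a minimal NumPy illustration of the Woodbury-identity trick, under the assumption that the empirical Fisher is modeled as F = λI + (1/m)·GᵀG for a gradient matrix G of shape (m, d); the function name `ifvp` and the damping value are illustrative.

```python
import numpy as np

def ifvp(G, x, lam=1e-4):
    """Inverse-Fisher-vector product F^{-1} x for F = lam*I + (1/m) * G.T @ G.

    G   : (m, d) array whose rows are the m most recent gradients
    x   : (d,) vector (e.g. the current gradient, or a weight vector for pruning scores)
    lam : damping term that keeps F invertible

    The Woodbury identity reduces the d x d inverse to an m x m linear solve,
    so the full Fisher matrix is never formed.
    """
    m, _ = G.shape
    Gx = G @ x                                  # (m,)
    inner = m * np.eye(m) + (G @ G.T) / lam     # (m, m) system from the Woodbury identity
    return x / lam - G.T @ np.linalg.solve(inner, Gx) / lam**2

# Toy usage with random gradients (illustration only).
rng = np.random.default_rng(0)
m, d = 32, 1000
G = rng.standard_normal((m, d))
g = rng.standard_normal(d)
step = ifvp(G, g)   # preconditioned direction, as a second-order optimizer would use it
```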
Alternatives and similar repositories for M-FAC
Users interested in M-FAC are comparing it to the libraries listed below
- Block Sparse movement pruning ☆80 · Updated 4 years ago
- ☆42 · Updated 2 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆52 · Updated 4 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆73 · Updated 4 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆105 · Updated 5 years ago
- Accuracy 77%. Large batch deep learning optimizer LARS for ImageNet with PyTorch and ResNet, using Horovod for distribution. Optional acc… ☆38 · Updated 4 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆147 · Updated 7 months ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆91 · Updated 2 years ago
- ☆46 · Updated 5 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021). ☆58 · Updated 3 years ago
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation ☆12 · Updated last year
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated last year
- ☆208 · Updated 2 years ago
- Implementation of (overlap) local SGD in Pytorch ☆33 · Updated 4 years ago
- Efficient LLM Inference Acceleration using Prompting ☆48 · Updated 8 months ago
- SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY ☆114 · Updated 5 years ago
- ☆57 · Updated last year
- Lightweight torch implementation of rigl, a sparse-to-sparse optimizer. ☆57 · Updated 3 years ago
- Code for the paper "A Statistical Framework for Low-bitwidth Training of Deep Neural Networks" ☆28 · Updated 4 years ago
- ☆70 · Updated 5 years ago
- Code release for "Adversarial Robustness vs Model Compression, or Both?" ☆91 · Updated 4 years ago
- Pytorch implementation of the paper "SNIP: Single-shot Network Pruning based on Connection Sensitivity" by Lee et al. ☆108 · Updated 6 years ago
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Ya… ☆140 · Updated 3 years ago
- Code for Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot ☆42 · Updated 4 years ago
- Generic Neural Architecture Search via Regression (NeurIPS'21 Spotlight) ☆36 · Updated 2 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 6 years ago
- Reproducing RigL (ICML 2020) as a part of ML Reproducibility Challenge 2020 ☆28 · Updated 3 years ago
- Code and checkpoints of compressed networks for the paper titled "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆92 · Updated 2 years ago
- ☆74 · Updated 6 years ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆122 · Updated last year