yuxwind / CBS
Official Code of The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks [ICML 2022]
☆17 · Updated 3 years ago
Alternatives and similar repositories for CBS
Users interested in CBS are comparing it to the repositories listed below
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- Code for reproducing "AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks" (NeurIPS 2021) ☆23 · Updated 4 years ago
- A generic code base for neural network pruning, especially for pruning at initialization. ☆31 · Updated 3 years ago
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… ☆33 · Updated 4 years ago
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆20 · Updated 2 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆34 · Updated 2 years ago
- Code for AAAI 2024 paper: CR-SAM: Curvature Regularized Sharpness-Aware Minimization ☆13 · Updated last year
- ☆28 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- Official implementation for "Knowledge Distillation with Refined Logits". ☆21 · Updated last year
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆41 · Updated 4 months ago
- ☆11 · Updated 2 years ago
- ☆23 · Updated 3 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023 notable top 25%) ☆26 · Updated last year
- ☆23 · Updated 6 years ago
- [NeurIPS 2022] Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach -- Official Implementation ☆47 · Updated 2 years ago
- ☆48 · Updated 2 years ago
- [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression ☆14 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- [NeurIPS 2024] Search for Efficient LLMs ☆16 · Updated last year
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated 2 years ago
- [ICLR 2021] "Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, S… ☆26 · Updated 4 years ago
- ICLR 2022 (Spotlight): Continual Learning With Filter Atom Swapping ☆16 · Updated 2 years ago
- You Only Condense Once: Two Rules for Pruning Condensed Datasets (NeurIPS 2023) ☆15 · Updated 2 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆77 · Updated 3 years ago
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation). ☆82 · Updated 10 months ago
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation" ☆37 · Updated last year
- [ICLR'21] Neural Pruning via Growing Regularization (PyTorch) ☆82 · Updated 4 years ago
- Implementation of PGONAS for CVPR22W and RD-NAS for ICASSP23 ☆23 · Updated 2 years ago