yuxwind / CBS
Official code of "The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks" [ICML 2022]
☆17 · Updated 3 years ago
Alternatives and similar repositories for CBS
Users who are interested in CBS are comparing it to the repositories listed below.
- A generic code base for neural network pruning, especially for pruning at initialization. ☆31 · Updated 3 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆34 · Updated 2 years ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆41 · Updated 3 months ago
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… ☆33 · Updated 3 years ago
- Code for reproducing "AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks" (NeurIPS 2021) ☆23 · Updated 4 years ago
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023 notable top 25%) ☆26 · Updated last year
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- Code for the AAAI 2024 paper "CR-SAM: Curvature Regularized Sharpness-Aware Minimization" ☆12 · Updated last year
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- Official implementation of the CVPR'23 paper "Regularization of polynomial networks for image recognition". ☆10 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- [ICLR 2022 (Spotlight)] Continual Learning With Filter Atom Swapping ☆16 · Updated 2 years ago
- Official implementation for "Knowledge Distillation with Refined Logits". ☆21 · Updated last year
- Recent Advances on Efficient Vision Transformers ☆55 · Updated 2 years ago
- [CVPR 2022] "The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy" by Tianlong C… ☆25 · Updated 3 years ago
- DataLoader for the TinyImageNet dataset ☆12 · Updated 4 years ago
- [NeurIPS 2022] Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach — official implementation ☆46 · Updated 2 years ago
- Code for "Multi-level Logit Distillation" (CVPR 2023) ☆70 · Updated last year
- PELA: Learning Parameter-Efficient Models with Low-Rank Approximation [CVPR 2024] ☆19 · Updated last year
- ☆23 · Updated 3 years ago
- [BMVC 2022] Information Theoretic Representation Distillation ☆19 · Updated 2 years ago
- ☆20 · Updated 5 years ago
- ☆11 · Updated 2 years ago
- [ICML 2022] "Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness" by Tianlong Chen*, Huan Zhang*, Zhenyu Zhang, Shiyu… ☆17 · Updated 3 years ago
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆20 · Updated 2 years ago
- [ICML 2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers ☆23 · Updated last year