YiteWang / NTK-SAP
[ICLR 2023] NTK-SAP: Improving neural network pruning by aligning training dynamics
☆18 · Updated last year
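For orientation, here is a minimal, hypothetical sketch of the NTK-SAP idea: score each pruning-mask entry by how much it affects a finite-difference surrogate of the NTK trace, averaged over several random re-initializations, then prune the lowest-scoring connections. All names (`MaskedLinear`, `ntk_sap_scores`) and hyperparameters (`eps`, `rounds`, the 90% sparsity) are illustrative assumptions, not the repo's actual API.

```python
# Hypothetical sketch of NTK-SAP-style pruning at initialization.
# Saliency ~ |d/dm E_theta ||f(theta + eps*dtheta; m) - f(theta; m)||^2|,
# a finite-difference surrogate for the (fixed-weight) NTK trace.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer whose weight is gated by a trainable pruning mask."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(d_out, d_in))
        nn.init.kaiming_normal_(self.weight)
        self.mask = nn.Parameter(torch.ones(d_out, d_in))

    def forward(self, x):
        return x @ (self.weight * self.mask).t()

def ntk_sap_scores(model, x, eps=1e-2, rounds=5):
    layers = [m for m in model.modules() if isinstance(m, MaskedLinear)]
    scores = [torch.zeros_like(l.mask) for l in layers]
    for _ in range(rounds):
        with torch.no_grad():
            for l in layers:                       # resample weights: the score
                nn.init.kaiming_normal_(l.weight)  # targets an average over inits
            y0 = model(x)                          # unperturbed output (constant)
            noise = [torch.randn_like(l.weight) for l in layers]
            for l, n in zip(layers, noise):
                l.weight.add_(eps * n)             # random weight perturbation
        y1 = model(x)                              # gradients flow to the masks
        loss = ((y1 - y0) ** 2).sum()
        grads = torch.autograd.grad(loss, [l.mask for l in layers])
        for s, g in zip(scores, grads):
            s += g.abs()
        with torch.no_grad():
            for l, n in zip(layers, noise):
                l.weight.sub_(eps * n)             # undo the perturbation
    return scores

# Prune the globally lowest-scoring 90% of connections (illustrative sparsity).
model = nn.Sequential(MaskedLinear(32, 64), nn.ReLU(), MaskedLinear(64, 10))
scores = ntk_sap_scores(model, torch.randn(128, 32))
flat = torch.cat([s.flatten() for s in scores])
threshold = flat.kthvalue(int(0.9 * flat.numel())).values
with torch.no_grad():
    for layer, s in zip([m for m in model.modules()
                         if isinstance(m, MaskedLinear)], scores):
        layer.mask.copy_((s > threshold).float())
```

The finite difference ||f(θ+εΔθ) − f(θ)||² approximates ε²·tr(JJᵀ) in expectation over Gaussian Δθ, which is why the sketch resamples both θ and Δθ across rounds: the score targets the expected fixed-weight NTK trace rather than a single draw.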
Alternatives and similar repositories for NTK-SAP:
Users interested in NTK-SAP are comparing it to the repositories listed below.
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆30 · Updated 4 months ago
- Deep Learning & Information Bottleneck ☆58 · Updated last year
- ☆13 · Updated 2 years ago
- [ICLR 2021] "Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, S… ☆25 · Updated 3 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆52 · Updated 5 months ago
- Prospect Pruning: Finding Trainable Weights at Initialization Using Meta-Gradients ☆31 · Updated 3 years ago
- Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection ☆21 · Updated 4 years ago
- ☆11 · Updated 2 years ago
- Code for Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot ☆42 · Updated 4 years ago
- [ICML 2022] Training Your Sparse Neural Network Better with Any Mask. Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang ☆27 · Updated 2 years ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆40 · Updated 2 years ago
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆14 · Updated last month
- Official implementation of the ICML 2023 paper "Can Forward Gradient Match Backpropagation?" ☆12 · Updated last year
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- ☆57 · Updated 2 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆46 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- [ICLR '21] Interpretable Neural Architecture Search using Bayesian Optimisation with Weisfeiler-Lehman Kernel (NAS-BOWL) ☆24 · Updated 3 years ago
- Recycling diverse models ☆44 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023 notable top 25%) ☆24 · Updated last year
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆28 · Updated last year
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 9 months ago
- ☆21 · Updated 2 years ago
- Reproducing RigL (ICML 2020) as a part of ML Reproducibility Challenge 2020 ☆28 · Updated 3 years ago
- [ICLR 2023] "Revisiting Pruning At Initialization Through The Lens of Ramanujan Graph" by Duc Hoang, Shiwei Liu, Radu Marculescu, Atlas W… ☆12 · Updated last year