haiquanlu / AlphaPruning
[NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self-Regularization Theory for Improved Layer-wise Pruning of Large Language Models
☆27 · Updated 4 months ago
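For readers new to the repo: AlphaPruning allocates non-uniform layer-wise pruning ratios from heavy-tailed self-regularization (HT-SR) metrics, i.e. power-law exponents (alpha) fitted to each layer's weight-matrix eigenspectrum. The sketch below is not the official code; the linear alpha-to-sparsity mapping and the alpha values are illustrative assumptions (per-layer alphas can be estimated with, e.g., the open-source `weightwatcher` package).

```python
# Illustrative sketch (NOT the official AlphaPruning code): map per-layer
# power-law exponents (alpha, from HT-SR theory) to layer-wise sparsity ratios.
import numpy as np

def allocate_sparsity(alphas, target_sparsity=0.5, spread=0.2):
    """Assign a pruning ratio per layer from its HT-SR alpha.

    Assumption: layers with smaller alpha (heavier-tailed spectrum, i.e.
    better trained under HT-SR theory) are pruned less; ratios are centered
    so their mean equals `target_sparsity`.
    """
    alphas = np.asarray(alphas, dtype=float)
    centered = alphas - alphas.mean()
    scores = centered / (np.abs(centered).max() + 1e-12)  # zero-mean, in [-1, 1]
    ratios = target_sparsity + spread * scores
    return np.clip(ratios, 0.0, 0.99)

# Hypothetical per-block alphas (e.g. estimated with `weightwatcher`):
layer_alphas = [2.1, 2.8, 3.5, 4.2, 5.0]
print(allocate_sparsity(layer_alphas))  # lower alpha -> lower sparsity
```

Keeping the mean ratio at the global target makes such a layer-wise scheme directly comparable to uniform-ratio pruning baselines run at the same overall sparsity.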
Alternatives and similar repositories for AlphaPruning
Users interested in AlphaPruning are comparing it to the libraries listed below
- Data distillation benchmark ☆68 · Updated 3 months ago
- Awesome-Low-Rank-Adaptation ☆117 · Updated 11 months ago
- Official PyTorch implementation of "Outlier-weighed Layerwise Sampling for LLM Fine-tuning" by Pengxiang Li, Lu Yin, Xiaowei Gao, Shiwei … ☆34 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆69 · Updated 7 months ago
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆52 · Updated 8 months ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆35 · Updated last year
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆45 · Updated last year
- Elucidated Dataset Condensation (NeurIPS 2024) ☆21 · Updated last year
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆100 · Updated last year
- ☆59 · Updated 9 months ago
- Official PyTorch implementation of our ICLR 2024 paper, Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆50 · Updated last year
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆74 · Updated 7 months ago
- A curated list of Model Merging methods. ☆92 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆83 · Updated last year
- [CVPR 2024 highlight] Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching (G-VBSM) ☆29 · Updated last year
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆22 · Updated last year
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆154 · Updated 3 months ago
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆105 · Updated 4 months ago
- [ICLR 2024] AdaMerging: Adaptive Model Merging for Multi-Task Learning ☆90 · Updated 11 months ago
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆129 · Updated 10 months ago
- ☆28 · Updated last year
- Task Singular Vectors: Reducing Task Interference in Model Merging. Merges models while avoiding task interference through separable models. ☆33 · Updated 2 months ago
- ☆14 · Updated 2 years ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆126 · Updated 3 months ago
- Dimple, the first Discrete Diffusion Multimodal Large Language Model ☆101 · Updated 3 months ago
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models ☆177 · Updated 9 months ago
- Code for paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆29 · Updated last year
- Official repo of M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning ☆27 · Updated 6 months ago
- [NeurIPS 2025] VeriThinker: Learning to Verify Makes Reasoning Model Efficient ☆53 · Updated 2 weeks ago
- ☆16 · Updated 11 months ago