WalkerWorldPeace / DOGE
Official implementation of "Modeling Multi-Task Model Merging as Adaptive Projective Gradient Descent".
☆21 · Updated 8 months ago
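The repository's description frames multi-task model merging as adaptive projective gradient descent. As a rough illustration of that general idea only (not the repository's actual algorithm or API), the sketch below merges task-specific fine-tuned weights by gradient descent from the task-arithmetic average, projecting out conflicting gradient components in PCGrad style; all names and hyperparameters here are hypothetical.

```python
import numpy as np

def merge_with_projected_gd(w_pre, w_tasks, lr=0.1, steps=50):
    """Toy sketch: merge task-specific weight vectors by gradient descent,
    removing conflicting gradient components (PCGrad-style projection).

    w_pre   : (d,) pretrained weights
    w_tasks : list of (d,) fine-tuned weights, one per task
    """
    # Initialize from the simple task-arithmetic average.
    taus = [w_t - w_pre for w_t in w_tasks]
    merged = w_pre + np.mean(taus, axis=0)

    for _ in range(steps):
        # Per-task "gradients": pull the merged weights toward each task's weights.
        grads = [merged - w_t for w_t in w_tasks]

        projected = []
        for i, g_i in enumerate(grads):
            g = g_i.copy()
            for j, g_j in enumerate(grads):
                if i == j:
                    continue
                dot = g @ g_j
                if dot < 0:  # conflicting direction: project out its component
                    g -= dot / (g_j @ g_j + 1e-12) * g_j
            projected.append(g)

        merged -= lr * np.mean(projected, axis=0)

    return merged

# Toy usage: three "tasks" over a 4-dimensional weight vector.
rng = np.random.default_rng(0)
w_pre = rng.normal(size=4)
w_tasks = [w_pre + rng.normal(scale=0.1, size=4) for _ in range(3)]
print(merge_with_projected_gd(w_pre, w_tasks))
```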
Alternatives and similar repositories for DOGE
Users interested in DOGE are comparing it to the libraries listed below.
- Awesome-Low-Rank-Adaptation ☆128 · Updated last year
- [ICML 2024] Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models ☆35 · Updated last year
- Task Singular Vectors: Reducing Task Interference in Model Merging. Merges models while avoiding task interference through separable models. ☆48 · Updated last month
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆46 · Updated last year
- [CVPR '24] Official implementation of the paper "Multiflow: Shifting Towards Task-Agnostic Vision-Language Pruning". ☆23 · Updated 11 months ago
- [ICLR 2025 Oral🔥] SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning ☆76 · Updated 7 months ago
- Code for 'Multi-level Logit Distillation' (CVPR 2023) ☆71 · Updated last year
- [NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self-Regularization Theory for Improved Layer-wise Pruning of Large Language Models ☆31 · Updated 8 months ago
- [ICCAD 2025] Squant ☆15 · Updated 7 months ago
- Official implementation of the NeurIPS 2024 paper "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆52 · Updated last year
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition ☆17 · Updated 9 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆76 · Updated 11 months ago
- Elucidated Dataset Condensation (NeurIPS 2024) ☆20 · Updated last year
- Code for the ICML 2024 oral paper "Test-Time Model Adaptation with Only Forward Passes" ☆95 · Updated last year
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆29 · Updated 3 years ago
- ☆63 · Updated last year
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆53 · Updated last year
- AAAI 2025 ☆11 · Updated 9 months ago
- ☆12 · Updated 6 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆233 · Updated last year
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆105 · Updated last year
- ☆28 · Updated 2 years ago
- Official implementation of "Mixture of Experts Meets Prompt-Based Continual Learning" (NeurIPS 2024)