didizhu-judy / Model-Tailor
[ICML 2024] Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models
☆33 Updated last year
Alternatives and similar repositories for Model-Tailor
Users interested in Model-Tailor are comparing it to the libraries listed below.
- Awesome Low-Rank Adaptation ☆58 Updated 4 months ago
- ☆55 Updated last year
- Reading notes on papers related to OOD generalization ☆36 Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆97 Updated last year
- Instruction Tuning in Continual Learning paradigm ☆66 Updated 10 months ago
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning. TPAMI, 2024. ☆344 Updated last month
- [ICLR 2025 Oral🔥] SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning ☆74 Updated 6 months ago
- Awesome-Low-Rank-Adaptation ☆124 Updated last year
- An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection. ☆22 Updated 10 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆234 Updated last year
- A Comprehensive Survey on Continual Learning in Generative Models. ☆102 Updated last month
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆74 Updated 9 months ago
- ☆148 Updated last year
- 🔥 【Meta Awesome List】: AI/ML Research Hub - Solving the "Chasing Hot Topics" Problem for AI Researchers. 🤖 Agent-driven intelligence au… ☆58 Updated 3 months ago
- Code for paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆30 Updated last year
- [CVPR 2024 Highlight] Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching (G-VBSM) ☆28 Updated last year
- [ICLR 2025] COME: Test-time Adaption by Conservatively Minimizing Entropy ☆17 Updated 9 months ago
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆24 Updated last year
- Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted by NAACL 2024 Findings) ☆25 Updated 10 months ago
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… ☆33 Updated 9 months ago
- ☆59 Updated 5 months ago
- FusionBench: A Comprehensive Benchmark/Toolkit of Deep Model Fusion ☆192 Updated last week
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆31 Updated 3 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆627 Updated this week
- Multimodal Large Language Model (MLLM) Tuning Survey: Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model ☆90 Updated 4 months ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆92 Updated last year
- ICLR 2024, Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆105 Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆51 Updated this week
- ICML 2025 Oral: ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via α-β-Divergence ☆41 Updated 4 months ago
- The official repository of "Whoever Started the Interference Should End It: Guiding Data-Free Model Merging via Task Vectors" ☆40 Updated 2 months ago