didizhu-judy / Model-Tailor
[ICML 2024] Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models
☆35Updated last year
Alternatives and similar repositories for Model-Tailor
Users that are interested in Model-Tailor are comparing it to the libraries listed below
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024.☆99Updated last year
- Awesome Low-Rank Adaptation☆59Updated 5 months ago
- ☆55Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning☆234Updated last year
- [ICLR 2025 Oral🔥] SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning☆75Updated 6 months ago
- Awesome-Low-Rank-Adaptation☆126Updated last year
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging☆74Updated 10 months ago
- ☆150Updated last year
- The official repository of "Whoever Started the Interference Should End It: Guiding Data-Free Model Merging via Task Vectors"☆44Updated 3 months ago
- Reading notes on papers related to OOD generalization☆35Updated last year
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning. TPAMI, 2024.☆345Updated last week
- An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection.☆22Updated 10 months ago
- Instruction Tuning in Continual Learning paradigm☆70Updated 11 months ago
- A Comprehensive Survey on Continual Learning in Generative Models.☆109Updated last week
- Code for paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts"☆30Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024]☆51Updated 3 weeks ago
- Multimodal Large Language Model (MLLM) Tuning Survey: Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model☆91Updated 5 months ago
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization"☆24Updated last year
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic☆32Updated 3 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method☆201Updated last year
- Official implementation of "Modeling Multi-Task Model Merging as Adaptive Projective Gradient Descent".☆21Updated 7 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. ACM Computing Surveys, 2025.☆638Updated this week
- MokA: Multimodal Low-Rank Adaptation for MLLMs☆66Updated 2 weeks ago
- ICLR 2024, Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching☆105Updated last year
- This repository collects awesome survey, resource, and paper for Lifelong Learning for Large Language Models. (Updated Regularly)☆68Updated 7 months ago
- A curated list of Model Merging methods.☆95Updated last month
- [ICLR 2025] Released code for paper "Spurious Forgetting in Continual Learning of Language Models"☆57Updated 8 months ago
- ☆14Updated 8 months ago
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark".☆123Updated 6 months ago
- 🔥 【Meta Awesome List】: AI/ML Research Hub - Solving the "Chasing Hot Topics" Problem for AI Researchers. 🤖 Agent-driven intelligence au…☆58Updated 4 months ago