didizhu-zju / Model-Tailor
☆27 · Updated last year
Alternatives and similar repositories for Model-Tailor
Users interested in Model-Tailor are comparing it to the repositories listed below.
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆88 · Updated 10 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆222 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆67 · Updated 6 months ago
- Awesome Low-Rank Adaptation ☆43 · Updated 3 weeks ago
- ☆49 · Updated 9 months ago
- Awesome-Low-Rank-Adaptation ☆115 · Updated 10 months ago
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning. TPAMI, 2024. ☆318 · Updated 2 weeks ago
- [ICLR 2025 Oral🔥] SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning ☆55 · Updated 2 months ago
- This repository collects awesome surveys, resources, and papers on Lifelong Learning for Large Language Models. (Updated regularly) ☆60 · Updated 3 months ago
- ☆141 · Updated 8 months ago
- An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection. ☆17 · Updated 6 months ago
- Instruction Tuning in the Continual Learning paradigm ☆58 · Updated 6 months ago
- ☆31 · Updated 11 months ago
- Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality (NeurIPS 2023, Spotlight) ☆86 · Updated 9 months ago
- ☆21 · Updated 5 months ago
- ☆12 · Updated 4 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆510 · Updated this week
- ☆16 · Updated 9 months ago
- Reading notes on OOD Generalization papers ☆31 · Updated 8 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆49 · Updated 10 months ago
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆121 · Updated last month
- ☆10 · Updated last year
- A paper list of our recent survey on continual learning, and other useful resources in this field. ☆87 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆180 · Updated last year
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆110 · Updated last month
- [CVPR 2024 Highlight] Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching (G-VBSM) ☆28 · Updated 10 months ago
- ☆51 · Updated 2 months ago
- Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted to NAACL 2024 Findings) ☆23 · Updated 6 months ago
- [ICLR 2025] "Noisy Test-Time Adaptation in Vision-Language Models" ☆16 · Updated 6 months ago
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… ☆29 · Updated 5 months ago