WuYichen-97 / SD-Lora-CL
[ICLR 2025 Oral 🔥] SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning
★75 · Updated 7 months ago
Alternatives and similar repositories for SD-Lora-CL
Users interested in SD-Lora-CL are comparing it to the libraries listed below.
- Instruction Tuning in Continual Learning paradigm ★71 · Updated 11 months ago
- ★150 · Updated last year
- The official implementation of the CVPR 2024 work Interference-Free Low-Rank Adaptation for Continual Learning ★102 · Updated 10 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ★233 · Updated last year
- ★32 · Updated 11 months ago
- Code for the CVPR 2024 paper "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters" ★269 · Updated 4 months ago
- [CVPR 2025] CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering ★47 · Updated 7 months ago
- ★16 · Updated last year
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ★53 · Updated last year
- Awesome list for VLM-CL. Continual Learning for VLMs: A Survey and Taxonomy Beyond Forgetting ★145 · Updated last week
- [ICML 2025] Test-Time Learning for Large Language Models ★39 · Updated 5 months ago
- Awesome-Low-Rank-Adaptation ★127 · Updated last year
- [ICCV 2023] A Unified Continual Learning Framework with General Parameter-Efficient Tuning ★92 · Updated last year
- A Comprehensive Survey on Continual Learning in Generative Models ★115 · Updated this week
- Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality (NeurIPS 2023, Spotlight) ★90 · Updated last year
- Code for the ICML 2024 paper (Oral) "Test-Time Model Adaptation with Only Forward Passes" ★92 · Updated last year
- ★56 · Updated last year
- [ICLR 2025] COME: Test-time Adaption by Conservatively Minimizing Entropy ★18 · Updated 10 months ago
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ★80 · Updated 11 months ago
- PyTorch code for the CVPR'23 paper "CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning" ★156 · Updated 2 years ago
- Multimodal Large Language Model (MLLM) Tuning Survey: Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model ★93 · Updated 5 months ago
- Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models ★106 · Updated last year
- [ICLR 2025] "Noisy Test-Time Adaptation in Vision-Language Models" ★17 · Updated 11 months ago
- [ICML 2024] Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models ★35 · Updated last year
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ★56 · Updated last year
- [ICML 2025 Oral] ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via α-β-Divergence ★40 · Updated 5 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ★168 · Updated 7 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ★76 · Updated 10 months ago
- The official PyTorch implementation of the CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models" ★95 · Updated 9 months ago
- Official implementation of "Unifying Multimodal Large Language Model Capabilities and Modalities via Model Merging" ★42 · Updated 3 months ago