ECNU-ICALK / CL-MoE
[CVPR 2025] CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering
☆44 · Updated 6 months ago
Alternatives and similar repositories for CL-MoE
Users interested in CL-MoE are comparing it to the repositories listed below.
- Code for our NeurIPS'24 paper ☆38 · Updated last year
- Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models ☆106 · Updated last year
- Instruction Tuning in Continual Learning paradigm ☆68 · Updated 11 months ago
- Awesome list of VLM-CL. Continual Learning for VLMs: A Survey and Taxonomy Beyond Forgetting ☆134 · Updated 2 weeks ago
- ☆150 · Updated last year
- ☆32 · Updated 10 months ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆56 · Updated last year
- Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality (NeurIPS 2023, Spotlight) ☆90 · Updated last year
- Consistent Prompting for Rehearsal-Free Continual Learning [CVPR 2024] ☆36 · Updated 6 months ago
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022) ☆203 · Updated 3 years ago
- Learning without Forgetting for Vision-Language Models (TPAMI 2025) ☆55 · Updated 6 months ago
- [NeurIPS 2024] Code for Dual Prototype Evolving for Test-Time Generalization of Vision-Language Models ☆45 · Updated 9 months ago
- [ICLR 2024 Spotlight] "Negative Label Guided OOD Detection with Pretrained Vision-Language Models" ☆21 · Updated last year
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models ☆84 · Updated last year
- Exploring prompt tuning with pseudo-labels for multiple modalities, learning settings, and training strategies. ☆51 · Updated last year
- Official code for the ICLR 2024 paper "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆85 · Updated last year
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆95 · Updated 8 months ago
- [CVPR 2024] Official Repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆112 · Updated last year
- ☆10 · Updated 8 months ago
- The official implementation of the CVPR 2024 work "Interference-Free Low-Rank Adaptation for Continual Learning" ☆101 · Updated 9 months ago
- [ICLR 2025] Official Implementation of Local-Prompt: Extensible Local Prompts for Few-Shot Out-of-Distribution Detection ☆49 · Updated 5 months ago
- Code release for "CLIPood: Generalizing CLIP to Out-of-Distributions" (ICML 2023), https://arxiv.org/abs/2302.00864 ☆69 · Updated 2 years ago
- [ACM Computing Surveys 2025] Recent Advances of Foundation Language Models-based Continual Learning: A Survey ☆23 · Updated 3 months ago
- [CVPR 2025] The implementation of the paper "OODD: Test-time Out-of-Distribution Detection with Dynamic Dictionary". ☆18 · Updated 8 months ago
- [ICLR 2024] ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation ☆71 · Updated last year
- Official code for the ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆103 · Updated last year
- ☆22 · Updated 3 weeks ago
- Official Repository for the ICML 2024 Paper "OT-CLIP: Understanding and Generalizing CLIP via Optimal Transport" ☆20 · Updated last month
- ☆105 · Updated 2 years ago
- 🔎 Official code for our paper "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation". ☆47 · Updated 9 months ago