ECNU-ICALK / CL-MoE
Official code for our CVPR paper "CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering"
☆14 · Updated 3 weeks ago
Alternatives and similar repositories for CL-MoE:
Users interested in CL-MoE are comparing it to the repositories listed below.
- MoCLE (first MLLM with MoE for instruction customization and generalization): https://arxiv.org/abs/2312.12379 ☆35 · Updated last year
- Instruction Tuning in the Continual Learning paradigm ☆47 · Updated 2 months ago
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆34 · Updated 3 months ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter-Efficient Tuning ☆30 · Updated 5 months ago
- Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality (NeurIPS 2023, Spotlight) ☆82 · Updated 5 months ago
- Official code for the ICLR 2024 paper "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆77 · Updated last year
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆67 · Updated last year
- Code for Merging Large Language Models ☆29 · Updated 8 months ago
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability" ☆32 · Updated 9 months ago
- Code for the paper "Understanding and Mitigating Hallucinations in Large Vision-Language Models via Modular Attrib…" ☆15 · Updated last month
- [ICLR 2025] Released code for the paper "Spurious Forgetting in Continual Learning of Language Models" ☆36 · Updated 2 months ago
- Code for "Learning the Unlearned: Mitigating Feature Suppression in Contrastive Learning"