[CVPR 2025] CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering
☆53 · Jun 16, 2025 · Updated 10 months ago
Alternatives and similar repositories for CL-MoE
Users that are interested in CL-MoE are comparing it to the libraries listed below.
- This is the official repo of MLLM-CL. ☆63 · Oct 10, 2025 · Updated 6 months ago
- MCITlib: Multimodal Continual Instruction Tuning Library and Benchmark ☆76 · Apr 9, 2026 · Updated last week
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆19 · Mar 10, 2025 · Updated last year
- [ICCV 2025] Official code of the paper "Dynamic Multi-Layer Null Space Projection for Vision-Language Continual Learning" ☆26 · Sep 8, 2025 · Updated 7 months ago
- Official repository for "Unveiling Opinion Evolution via Prompting and Diffusion for Short Video Fake News Detection", ACL Findings 2024. ☆15 · Apr 25, 2025 · Updated 11 months ago
- [CVPR 2025] Lifelong Knowledge Editing for Vision Language Models with Low-Rank Mixture-of-Experts ☆23 · Jun 22, 2025 · Updated 9 months ago
- Contrastive Continual Learning with Importance Sampling and Prototype-Instance Relation Distillation ☆12 · Jul 22, 2024 · Updated last year
- ☆12 · Updated this week
- Awesome list for VLM-CL. Continual Learning for VLMs: A Survey and Taxonomy Beyond Forgetting ☆175 · Apr 9, 2026 · Updated last week
- ☆151 · Mar 27, 2026 · Updated 2 weeks ago
- CRAI is a multimodal large language model based on the Mixture of Experts (MoE) architecture, supporting text and image cross-modal tasks… ☆16 · Apr 29, 2025 · Updated 11 months ago
- [ACL'25 Main] Official Implementation of HiDe-LLaVA: Hierarchical Decoupling for Continual Instruction Tuning of Multimodal Large Languag… ☆50 · Feb 16, 2026 · Updated 2 months ago
- MoE-Visualizer is a tool designed to visualize the selection of experts in Mixture-of-Experts (MoE) models. ☆16 · Apr 8, 2025 · Updated last year
- The official PyTorch implementation of the CVPR 2025 paper "Language Guided Concept Bottleneck Models for Interpretable Continual Learning" ☆34 · Jun 17, 2025 · Updated 9 months ago
- ☆11 · May 6, 2025 · Updated 11 months ago
- ☆10 · May 16, 2025 · Updated 11 months ago
- Scaling Laws for Mixture of Experts Models ☆15 · Feb 25, 2025 · Updated last year
- Converts a trained face-classifier model file to the .pb format to ease engineering deployment. ☆11 · Jan 1, 2020 · Updated 6 years ago
- PyTorch code for the CVPR'23 paper: "ConStruct-VL: Data-Free Continual Structured VL Concepts Learning" ☆13 · Feb 5, 2024 · Updated 2 years ago
- Compiler principles course project, Dalian University of Technology. ☆11 · Jan 1, 2024 · Updated 2 years ago
- This is the official code for the paper "Reconstruct before Query: Continual Missing Modality Learning with Decomposed Prompt Collaborati… ☆12 · Aug 13, 2024 · Updated last year
- ☆11 · Jul 26, 2023 · Updated 2 years ago
- Code for the NeurIPS 2021 paper "Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning". ☆16 · Oct 18, 2021 · Updated 4 years ago
- ☆15 · Mar 18, 2026 · Updated 3 weeks ago
- Sentiment classification of English text with a CNN, implemented in TensorFlow. ☆10 · Aug 29, 2018 · Updated 7 years ago
- Mixture-of-Experts Multimodal Variational Autoencoder ☆15 · Jul 3, 2025 · Updated 9 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆61 · Feb 7, 2025 · Updated last year
- [ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization ☆24 · Oct 5, 2025 · Updated 6 months ago
- [ICLR 2025 Oral🔥] SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning ☆87 · Jun 27, 2025 · Updated 9 months ago
- Official website for TIC-VLA ☆40 · Feb 3, 2026 · Updated 2 months ago
- Code for "MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking" ☆19 · Jan 25, 2025 · Updated last year
- Code for the CVPR 2024 paper "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters" ☆270 · Sep 18, 2025 · Updated 6 months ago
- ☆11 · Jul 4, 2024 · Updated last year
- IPO: Interpretable Prompt Optimization for Vision-Language Models (NeurIPS 2024) ☆15 · Mar 4, 2025 · Updated last year
- Update BasicSR to support PyTorch 2.0 DDP training ☆17 · Mar 23, 2023 · Updated 3 years ago
- Zero-shot RGB-D Point Cloud Registration with Pre-trained Large Vision Model ☆17 · Mar 15, 2025 · Updated last year
- Multimodal Federated Learning on IoT Data ☆11 · Dec 17, 2023 · Updated 2 years ago
- Secure and Scalable Federated Learning using Serverless Computing ☆12 · Jan 31, 2024 · Updated 2 years ago
- Prototype of MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism ☆28 · Apr 4, 2025 · Updated last year