JiuTian-VL / MoME
[NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models
☆77 Updated 3 weeks ago
Alternatives and similar repositories for MoME
Users interested in MoME are comparing it to the repositories listed below.
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆86 Updated last year
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆62 Updated last year
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆76 Updated 6 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆66 Updated 7 months ago
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆42 Updated 3 months ago
- [CVPR 2025] The official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆203 Updated 7 months ago
- [ICCV 2025] VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆42 Updated 2 months ago
- [CVPR 2025 Highlight] Interpreting Object-level Foundation Models via Visual Precision Search ☆54 Updated last month
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆151 Updated 2 months ago
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆109 Updated 7 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆78 Updated last month
- CLIP-MoE: Mixture of Experts for CLIP ☆51 Updated last year
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆99 Updated 6 months ago
- [CVPR 2025] Official PyTorch Code for "MMRL: Multi-Modal Representation Learning for Vision-Language Models" and its extension "MMRL++: P…" ☆92 Updated 6 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆203 Updated 6 months ago
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆172 Updated this week
- Visual self-questioning for large vision-language assistants ☆45 Updated 5 months ago
- [ECCV 2024] BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models ☆86 Updated last year
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆32 Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆169 Updated last year
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆68 Updated last year
- [NeurIPS 2024] Dense Connector for MLLMs ☆180 Updated last year
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆71 Updated last month
- ☆83 Updated last year
- Distilling Large Vision-Language Model with Out-of-Distribution Generalizability (ICCV 2023) ☆60 Updated last year
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆41 Updated 2 months ago
- ☆100 Updated last year
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆109 Updated last year
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆132 Updated 5 months ago
- ☆124 Updated last year