yuecao0119 / MMFuser
The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". MMFuser addresses the limitations of current MLLMs in capturing complex image details by integrating multi-layer features from Vision Transformers (ViTs) in a simple yet efficient way.
☆59 · Updated 11 months ago
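For context, the idea named in the description, fusing features drawn from several ViT layers before they are passed to the LLM, can be illustrated with a minimal PyTorch sketch. This is not the official MMFuser code: the class name, the chosen layer indices, and the softmax-weighted fusion are assumptions made only for illustration.

```python
# Minimal sketch (not the official MMFuser implementation): fuse hidden states
# from several ViT layers into one visual token sequence. The hidden_states
# tuple could come from e.g. a HuggingFace CLIPVisionModel called with
# output_hidden_states=True.
import torch
import torch.nn as nn


class MultiLayerFeatureFuser(nn.Module):
    """Illustrative fuser: a learnable softmax-weighted sum over selected
    ViT layers, followed by a linear projection to the LLM hidden size."""

    def __init__(self, vit_dim: int, llm_dim: int, layer_ids=(6, 12, 18, 24)):
        super().__init__()
        self.layer_ids = layer_ids                       # assumed layer choice
        self.layer_weights = nn.Parameter(torch.zeros(len(layer_ids)))
        self.proj = nn.Linear(vit_dim, llm_dim)

    def forward(self, hidden_states):
        # hidden_states: tuple of (B, N, vit_dim) tensors, one per ViT layer.
        feats = torch.stack([hidden_states[i] for i in self.layer_ids], dim=0)
        weights = torch.softmax(self.layer_weights, dim=0).view(-1, 1, 1, 1)
        fused = (weights * feats).sum(dim=0)             # (B, N, vit_dim)
        return self.proj(fused)                          # (B, N, llm_dim)
```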
Alternatives and similar repositories for MMFuser
Users interested in MMFuser are comparing it to the repositories listed below.
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆103 · Updated 4 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆87 · Updated 3 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆59 · Updated 11 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆44 · Updated 10 months ago
- ☆119 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆48 · Updated last year
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆40 · Updated this week
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆23 · Updated 6 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆72 · Updated 5 months ago
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆50 · Updated last year
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types ☆31 · Updated 3 months ago
- ☆53 · Updated 9 months ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆84 · Updated last year
- Visual self-questioning for large vision-language assistant. ☆45 · Updated 3 months ago
- [ICCV 2025] Dynamic-VLM ☆25 · Updated 10 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆20 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆166 · Updated last year
- [ECCV 2024] FlexAttention for Efficient High-Resolution Vision-Language Models ☆44 · Updated 9 months ago
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆35 · Updated 6 months ago
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆58 · Updated last month
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆64 · Updated last year
- Official repository of "Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach" (ACL 2024 Oral) ☆32 · Updated 7 months ago
- Official implementation of CVPR 2024 paper "Retrieval-Augmented Open-Vocabulary Object Detection" ☆44 · Updated last year
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆136 · Updated 5 months ago
- ☆32 · Updated last year
- [NeurIPS 2024] Dense Connector for MLLMs ☆179 · Updated last year
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆40 · Updated 3 months ago
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆22 · Updated last year
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆88 · Updated 4 months ago
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆50 · Updated 2 months ago