AFeng-x / Draw-and-Understand
[ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want
☆88 · Updated 3 months ago
Alternatives and similar repositories for Draw-and-Understand
Users that are interested in Draw-and-Understand are comparing it to the libraries listed below
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆139 · Updated 2 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆166 · Updated last year
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated 11 months ago
- The official implementation of RAR ☆92 · Updated last year
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆93 · Updated 8 months ago
- ☆119 · Updated last year
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆49 · Updated 9 months ago
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆104 · Updated 4 months ago
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆155 · Updated 9 months ago
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆157 · Updated last year
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆81 · Updated 11 months ago
- Official implementation of MIA-DPO ☆66 · Updated 8 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆63 · Updated last year
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆87 · Updated 2 months ago
- ☆76 · Updated last year
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆136 · Updated 4 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆192 · Updated 2 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆73 · Updated last year
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins…" ☆19 · Updated last year
- ☆32 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated last year
- ☆122 · Updated 6 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆64 · Updated 2 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆177 · Updated 11 months ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆45 · Updated 8 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆40 · Updated 6 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆53 · Updated 3 months ago
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆37 · Updated 3 months ago
- ☆25 · Updated last year
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆108 · Updated 3 months ago