AFeng-x / Draw-and-Understand
[ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want
☆74 · Updated 4 months ago
Alternatives and similar repositories for Draw-and-Understand
Users interested in Draw-and-Understand are comparing it to the repositories listed below.
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆156 · Updated 8 months ago
- ☆115 · Updated 10 months ago
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆34 · Updated 5 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆75 · Updated 7 months ago
- [CVPR'2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆163 · Updated last week
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆145 · Updated 5 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆105 · Updated 2 months ago
- ☆25 · Updated last year
- The official implementation of RAR ☆88 · Updated last year
- [NeurIPS2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆172 · Updated this week
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆45 · Updated 2 months ago
- Official repository for the paper "MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning" (https://arxiv.org/abs/2406.17770) ☆156 · Updated 8 months ago
- ☆84 · Updated 2 months ago
- Official implementation of MIA-DPO ☆58 · Updated 4 months ago
- ☆58 · Updated last year
- Official repo for StableLLAVA ☆95 · Updated last year
- ☆81 · Updated last month
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆60 · Updated 10 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆90 · Updated 4 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆60 · Updated 11 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆139 · Updated 6 months ago
- [CVPR2025] Code release of F-LMM: Grounding Frozen Large Multimodal Models ☆90 · Updated this week
- ☆91 · Updated last year
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 2 months ago
- [ICLR2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆74 · Updated 8 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆37 · Updated 5 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆82 · Updated last month
- Evolving Temporal Reasoning Capability into LMMs via Temporal Consistent Reward ☆35 · Updated 2 months ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆43 · Updated 4 months ago
- A collection of visual instruction tuning datasets ☆76 · Updated last year