om-ai-lab / ZoomEye
ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration
☆45 · Updated 6 months ago
Alternatives and similar repositories for ZoomEye
Users interested in ZoomEye are comparing it to the libraries listed below.
- Official code for paper "GRIT: Teaching MLLMs to Think with Images" ☆105 · Updated 2 weeks ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆64 · Updated last month
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆81 · Updated 9 months ago
- ☆45 · Updated 6 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di…" ☆55 · Updated 8 months ago
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated 8 months ago
- Official implementation of MIA-DPO ☆59 · Updated 5 months ago
- ☆50 · Updated 5 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆57 · Updated 2 weeks ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆19 · Updated 8 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding" ☆56 · Updated 8 months ago
- ☆83 · Updated 6 months ago
- Pixel-Level Reasoning Model trained with RL ☆158 · Updated 2 weeks ago
- [ICCV 2025] Dynamic-VLM ☆21 · Updated 6 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆51 · Updated 6 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 3 months ago
- ☆18 · Updated 3 weeks ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆68 · Updated last year
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆82 · Updated last month
- Implementation for "The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer" ☆51 · Updated 2 weeks ago
- Repo for paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated 4 months ago
- [ECCV 2024] FlexAttention for Efficient High-Resolution Vision-Language Models ☆41 · Updated 6 months ago
- [NeurIPS 2024] Official PyTorch Implementation of "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment" ☆58 · Updated 9 months ago
- [ACL 2025] MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆46 · Updated last month
- ☆86 · Updated 3 weeks ago
- ☆42 · Updated 8 months ago
- ☆73 · Updated last year
- [ICCV 2025] VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆32 · Updated 3 weeks ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 4 months ago