om-ai-lab / ZoomEye
[EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration
☆54 · Updated last month
Alternatives and similar repositories for ZoomEye
Users interested in ZoomEye are comparing it to the repositories listed below
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆139 · Updated last month
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆60 · Updated 2 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆66 · Updated 3 months ago
- [NeurIPS 2025] Pixel-Level Reasoning Model trained with RL ☆216 · Updated 3 weeks ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆88 · Updated 3 months ago
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated 11 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆166 · Updated last year
- Official implementation of MIA-DPO ☆66 · Updated 8 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆133 · Updated 6 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆32 · Updated 11 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆86 · Updated last year
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆80 · Updated last month
- ☆90 · Updated 8 months ago
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph ☆28 · Updated last month
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆40 · Updated 6 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆64 · Updated 2 months ago
- [CVPR 2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆189 · Updated 3 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆78 · Updated last month
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding"… ☆58 · Updated 10 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆57 · Updated 10 months ago
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆147 · Updated 10 months ago
- [ICCV 2025] VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆39 · Updated 3 months ago
- ☆89 · Updated 3 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆72 · Updated 4 months ago
- [ICCV 2025] Dynamic-VLM ☆25 · Updated 9 months ago
- ☆100 · Updated last year
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆61 · Updated 7 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆63 · Updated last year
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆74 · Updated 2 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 4 months ago