om-ai-lab / ZoomEye
ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration
☆37 · Updated 5 months ago
Alternatives and similar repositories for ZoomEye
Users who are interested in ZoomEye are comparing it to the repositories listed below.
- ☆84 · Updated 2 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 2 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆78 · Updated 9 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆60 · Updated last week
- ☆44 · Updated 5 months ago
- Pixel-Level Reasoning Model trained with RL ☆140 · Updated last week
- ☆78 · Updated 5 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆18 · Updated 8 months ago
- [ACL 2025] MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆45 · Updated 2 weeks ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆67 · Updated 2 weeks ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆118 · Updated 2 weeks ago
- This is the official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 6 months ago
- Official implementation of MIA-DPO ☆58 · Updated 5 months ago
- [ACL 2024 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆64 · Updated 9 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆76 · Updated 2 weeks ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity" ☆54 · Updated 7 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆60 · Updated last year
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆51 · Updated 5 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆39 · Updated 3 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆104 · Updated 3 weeks ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated 3 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆28 · Updated 8 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆91 · Updated last week
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆50 · Updated 6 months ago
- ☆48 · Updated 2 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆54 · Updated this week
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 5 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆32 · Updated 2 months ago
- Official implementation of "Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning" ☆16 · Updated 7 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆119 · Updated last month