ZJU-REAL / ViewSpatial-Bench
ViewSpatial-Bench: Evaluating Multi-perspective Spatial Localization in Vision-Language Models
☆30 · Updated last week
Alternatives and similar repositories for ViewSpatial-Bench
Users interested in ViewSpatial-Bench are comparing it to the libraries listed below.
- SVG benchmark ☆22 · Updated this week
- Code for the paper "InftyThink: Breaking the Length Limits of Long-Context Reasoning in Large Language Models" ☆19 · Updated this week
- A curated collection of resources, tools, and frameworks for developing GUI Agents. ☆51 · Updated last week
- Mind the Gap: Bridging Thought Leap for Improved CoT Tuning (https://arxiv.org/abs/2505.14684) ☆34 · Updated last week
- [ICML 2025] Official implementation of the paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in…" ☆120 · Updated last week
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆62 · Updated 2 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 4 months ago
- [arXiv 2504.09130] VisuoThink: Empowering LVLM Reasoning with Multimodal Tree Search ☆18 · Updated last month
- ☆84 · Updated 2 months ago
- Data and code for the CVPR 2025 paper "MMVU: Measuring Expert-Level Multi-Discipline Video Understanding" ☆68 · Updated 3 months ago
- [CVPR 2025] Interleaved-Modal Chain-of-Thought ☆45 · Updated last month
- Code for "Let LLMs Break Free from Overthinking via Self-Braking Tuning" (https://arxiv.org/abs/2505.14604) ☆38 · Updated last week
- TimeChat-online: 80% of Visual Tokens are Naturally Redundant in Streaming Videos ☆42 · Updated 2 weeks ago
- VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning ☆29 · Updated last month
- ☆100 · Updated last month
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆63 · Updated 10 months ago
- ☆24 · Updated 3 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆37 · Updated 5 months ago
- Official implementation of GUI-R1: A Generalist R1-Style Vision-Language Action Model for GUI Agents ☆107 · Updated last month
- Official implementation of MIA-DPO ☆58 · Updated 4 months ago
- [LLaVA-Video-R1] ✨ First adaptation of R1 to LLaVA-Video (2025-03-18) ☆27 · Updated 3 weeks ago
- ☆39 · Updated last month
- This repository continuously updates the latest papers, technical reports, and benchmarks on multimodal reasoning! ☆41 · Updated 2 months ago
- A hot-pluggable tool for visualizing LLaVA's attention. ☆19 · Updated last year
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆105 · Updated 3 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆72 · Updated 2 weeks ago
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆84 · Updated 2 months ago
- MM-PRM: Enhancing Multimodal Mathematical Reasoning with Scalable Step-Level Supervision ☆20 · Updated last week
- [CVPR 2025] Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Att… ☆17 · Updated 3 months ago
- [ACL 2024] Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models. Detect and mitigate object hallucinatio… ☆21 · Updated 4 months ago