jiayuww / SpatialEval
[NeurIPS'24] SpatialEval: a benchmark to evaluate spatial reasoning abilities of MLLMs and LLMs
☆47 · Updated 6 months ago
Alternatives and similar repositories for SpatialEval
Users interested in SpatialEval are comparing it to the repositories listed below
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning" ☆56 · Updated last year
- ☆69 · Updated 2 weeks ago
- A paper list for spatial reasoning ☆127 · Updated last month
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆88 · Updated 9 months ago
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan…" ☆61 · Updated 4 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆115 · Updated last week
- ☆41 · Updated 2 months ago
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) ☆39 · Updated 3 months ago
- [ICLR 2023] CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding ☆45 · Updated 2 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated last month
- ☆45 · Updated 7 months ago
- Code for the paper "Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning" ☆44 · Updated last year
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆86 · Updated 11 months ago
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph ☆27 · Updated this week
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆91 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆77 · Updated last year
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆101 · Updated 4 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of "PhysBench: Benchmarking and Enhancing Vision-Language Models …