apple / ml-space-benchmark
Code and data for "Does Spatial Cognition Emerge in Frontier Models?"
☆23 · Updated 4 months ago
Alternatives and similar repositories for ml-space-benchmark
Users interested in ml-space-benchmark are comparing it to the libraries listed below.
- Code for paper "Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning" ☆45 · Updated last year
- ☆44 · Updated last year
- ☆71 · Updated 9 months ago
- [ICLR 2023] CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding ☆45 · Updated 2 months ago
- LogiCity@NeurIPS'24, D&B track. A multi-agent inductive learning environment for "abstractions". ☆25 · Updated 2 months ago
- [NeurIPS'24] SpatialEval: a benchmark to evaluate spatial reasoning abilities of MLLMs and LLMs ☆47 · Updated 7 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated 11 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆31 · Updated last year
- ☆33 · Updated 2 years ago
- This repository is a collection of research papers on World Models. ☆38 · Updated last year
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆29 · Updated last month
- Source code for the paper "Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models" ☆16 · Updated last week
- ☆30 · Updated last year
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆67 · Updated 3 months ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning", https://arxiv.org/abs/2505.13934 ☆80 · Updated 2 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆137 · Updated last year
- Code for "Pretrained Language Models as Visual Planners for Human Assistance" ☆61 · Updated 2 years ago
- ☆52 · Updated last year
- Official implementation of paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR 2025) ☆43 · Updated 4 months ago
- This repo contains the official implementation of ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long … ☆93 · Updated last year
- ☆41 · Updated 2 months ago
- [NeurIPS2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆104 · Updated last year
- A paper list of world models ☆29 · Updated 4 months ago
- A Model for Embodied Adaptive Object Detection ☆46 · Updated 3 years ago
- Code and datasets for "What's "up" with vision-language models? Investigating their struggle with spatial reasoning". ☆58 · Updated last year
- [CVPR'24 Highlight] The official code and data for paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆61 · Updated 5 months ago
- [CVPR 2024 Highlight] SPOT: Self-Training with Patch-Order Permutation for Object-Centric Learning with Autoregressive Transformers ☆68 · Updated last year
- Code for Stable Control Representations ☆25 · Updated 5 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated last year
- Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks ☆21 · Updated last month