USC-GVL / PhysBench
[ICLR 2025] Official implementation and benchmark evaluation repository of "PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding"
☆66 · Updated 2 months ago
Alternatives and similar repositories for PhysBench
Users interested in PhysBench are comparing it to the libraries listed below.
- Source code for the paper "MindJourney: Test-Time Scaling with World Models for Spatial Reasoning" ☆69 · Updated last week
- ☆82 · Updated last week
- Main repo for the SimWorld simulator ☆57 · Updated last month
- ☆41 · Updated last month
- ☆69 · Updated 2 weeks ago
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆140 · Updated 2 months ago
- ☆21 · Updated 9 months ago
- Program synthesis for 3D spatial reasoning ☆44 · Updated last month
- ☆133 · Updated 7 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆162 · Updated 3 months ago
- ☆77 · Updated 11 months ago
- [CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs ☆44 · Updated last year
- Official implementation of "Self-Improving Video Generation" ☆66 · Updated 3 months ago
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆39 · Updated 7 months ago
- ☆76 · Updated 2 months ago
- Code for Stable Control Representations ☆25 · Updated 4 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆115 · Updated this week
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆50 · Updated 3 months ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆77 · Updated 2 months ago
- OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆58 · Updated 2 weeks ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆109 · Updated this week
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆133 · Updated last month
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆81 · Updated 2 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆52 · Updated 5 months ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆137 · Updated 2 months ago
- ☆42 · Updated last year
- A paper list covering world models and generative video models for embodied agents ☆24 · Updated 6 months ago
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Language Models" ☆61 · Updated 4 months ago
- OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models ☆52 · Updated this week
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆130 · Updated 9 months ago