vision-x-nyu / thinking-in-space
Official repo and evaluation implementation of VSI-Bench
★645 · Updated 4 months ago
Alternatives and similar repositories for thinking-in-space
Users interested in thinking-in-space are comparing it with the repositories listed below.
- Compose multimodal datasets · ★516 · Updated 4 months ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" · ★286 · Updated 11 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [the first paper to explore R1 for video] · ★759 · Updated 2 months ago
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence · ★391 · Updated 5 months ago
- A paper list for spatial reasoning · ★471 · Updated last week
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … · ★194 · Updated 7 months ago
- A repository for organizing papers, code, and other resources related to Visual Reinforcement Learning · ★347 · Updated last week
- [ICML 2024] Official code repository for the 3D embodied generalist agent LEO · ★468 · Updated 7 months ago
- [ICCV 2025] A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with the 3D World · ★354 · Updated last month
- Cambrian-S: Towards Spatial Supersensing in Video · ★407 · Updated 3 weeks ago
- [ICLR 2025] VILA-U: A Unified Foundation Model Integrating Visual Understanding and Generation · ★411 · Updated 7 months ago
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant · ★351 · Updated 8 months ago
- RynnVLA-002: A Unified Vision-Language-Action and World Model · ★679 · Updated 2 weeks ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning · ★98 · Updated 4 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models · ★331 · Updated last year
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models" · ★319 · Updated 2 months ago
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI · ★642 · Updated 5 months ago
- Long-RL: Scaling RL to Long Sequences (NeurIPS 2025) · ★669 · Updated 2 months ago
- The official code of VideoAgent: A Memory-Augmented Multimodal Agent for Video Understanding (ECCV 2024) · ★273 · Updated last year
- [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" · ★182 · Updated 6 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy · ★296 · Updated 3 weeks ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … · ★406 · Updated 11 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey · ★461 · Updated 10 months ago
- A comprehensive list of papers on the definition of World Models and the use of World Models for General Video Generation, Embodied AI, and A… · ★843 · Updated this week
- [NeurIPS 2025] Official repo of Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration · ★96 · Updated this week
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024) · ★199 · Updated last month
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning · ★238 · Updated last month