vision-x-nyu / thinking-in-space
Official repo and evaluation implementation of VSI-Bench
☆541 · Updated last week
Alternatives and similar repositories for thinking-in-space
Users interested in thinking-in-space are comparing it to the repositories listed below.
- Compose multimodal datasets (☆438, updated last month)
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" (☆215, updated 7 months ago)
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] (☆609, updated last month)
- [ICML 2024] Official code repository for the 3D embodied generalist agent LEO (☆446, updated 2 months ago)
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … (☆156, updated 2 months ago)
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation (☆362, updated 2 months ago)
- [ICCV 2025] A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World (☆281, updated this week)
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence (☆285, updated 3 weeks ago)
- A paper list for spatial reasoning (☆119, updated last month)
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey (☆446, updated 5 months ago)
- OpenEQA: Embodied Question Answering in the Era of Foundation Models (☆299, updated 9 months ago)
- Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long c… (☆542, updated last week)
- The official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) (☆221, updated 7 months ago)
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey (☆695, updated 2 weeks ago)
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024) (☆173, updated 3 months ago)
- WorldVLA: Towards Autoregressive Action World Model (☆248, updated last week)
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant (☆303, updated 3 months ago)
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models" (☆281, updated last month)
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI (☆605, updated last month)
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation (☆450, updated 7 months ago)
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models) (☆116, updated last year)
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … (☆342, updated 6 months ago)
- Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning (☆219, updated 2 weeks ago)
- Official repository for VisionZip (CVPR 2025) (☆319, updated last month)
- [CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Langu… (☆297, updated 11 months ago)
- Visual Planning: Let's Think Only with Images (☆253, updated last month)
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] (☆350, updated 4 months ago)
- A collection and survey of vision-language model papers and models (GitHub repository) (☆259, updated last week)
- [CVPR 2025] The code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" (☆133, updated last month)