vision-x-nyu / thinking-in-space
Official repo and evaluation implementation of VSI-Bench
☆512 · Updated 3 months ago
Alternatives and similar repositories for thinking-in-space
Users interested in thinking-in-space are comparing it to the repositories listed below
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆211 · Updated 6 months ago
- Compose multimodal datasets ☆413 · Updated 2 weeks ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [the first paper to explore R1 for video] ☆577 · Updated 3 weeks ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆348 · Updated 2 months ago
- ☆401 · Updated last year
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆133 · Updated last month
- A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World ☆273 · Updated 6 months ago
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ☆443 · Updated 2 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆445 · Updated 5 months ago
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆663 · Updated this week
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆448 · Updated 6 months ago
- Project page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆424 · Updated 2 weeks ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆331 · Updated 6 months ago
- A repository for organizing papers, code, and other resources related to unified multimodal models ☆588 · Updated this week
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆291 · Updated 9 months ago
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence ☆235 · Updated this week
- Official repository for VisionZip (CVPR 2025) ☆305 · Updated 3 weeks ago
- Long Context Transfer from Language to Vision ☆382 · Updated 3 months ago
- Official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) ☆215 · Updated 6 months ago
- First Open-Source R1-like Video-LLM [2025/02/18] ☆348 · Updated 4 months ago
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI ☆602 · Updated last week
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆665 · Updated 3 weeks ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆315 · Updated last year
- A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio) ☆479 · Updated 2 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models" ☆270 · Updated 3 weeks ago
- Official implementation of UnifiedReward & UnifiedReward-Think ☆429 · Updated last week
- The first paper to explore how to effectively use RL for MLLMs, introducing Vision-R1, a reasoning MLLM that leverages cold-sta… ☆613 · Updated last week
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant ☆295 · Updated 3 months ago
- Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning ☆206 · Updated 2 weeks ago
- Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long c… ☆517 · Updated last week