The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models."
☆335 · Updated Sep 14, 2025
Alternatives and similar repositories for SpatialBot
Users interested in SpatialBot are comparing it to the libraries listed below.
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" · ☆313 · Updated Dec 14, 2024
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model · ☆622 · Updated Oct 29, 2024
- A Vision-Language Model for Spatial Affordance Prediction in Robotics · ☆213 · Updated Jul 17, 2025
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" · ☆300 · Updated Apr 22, 2024
- Compose multimodal datasets 🎹 · ☆546 · Updated Jan 5, 2026
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) · ☆146 · Updated Jul 9, 2024
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World · ☆133 · Updated Oct 24, 2024
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy · ☆227 · Updated Mar 29, 2025
- ☆438 · Updated Nov 29, 2025
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo…) · ☆980 · Updated Dec 20, 2025
- Official repo and evaluation implementation of VSI-Bench · ☆673 · Updated Aug 5, 2025
- [ICLR 2025, Oral] EmbodiedSAM: Online Segment Any 3D Thing in Real Time · ☆614 · Updated May 7, 2025
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI · ☆652 · Updated Jun 13, 2025
- ☆56 · Updated Oct 3, 2024
- [ECCV 2024] M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts. · ☆61 · Updated Oct 1, 2024
- [ICML 2024] LEO: An Embodied Generalist Agent in 3D World · ☆476 · Updated Apr 20, 2025
- [ICCV 2025] A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World · ☆373 · Updated Oct 21, 2025
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. · ☆657 · Updated Jun 23, 2025
- OpenVLA: An open-source vision-language-action model for robotic manipulation. · ☆5,317 · Updated Mar 23, 2025
- OpenEQA: Embodied Question Answering in the Era of Foundation Models · ☆341 · Updated Sep 20, 2024
- ☆75 · Updated Jan 8, 2025
- Official implementation of RAM: Retrieval-Based Affordance Transfer for Generalizable Zero-Shot Robotic Manipulation · ☆99 · Updated Dec 30, 2024
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models · ☆102 · Updated Aug 22, 2024
- A family of lightweight multimodal models. · ☆1,052 · Updated Nov 18, 2024
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation · ☆280 · Updated Jul 8, 2025
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding · ☆129 · Updated May 22, 2025
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation · ☆1,625 · Updated Jan 21, 2026
- ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation · ☆911 · Updated Feb 20, 2025
- A curated list of 3D Vision papers relating to the Robotics domain in the era of large models, i.e. LLMs/VLMs, inspired by awesome-computer-vi… · ☆796 · Updated Dec 17, 2025
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… · ☆168 · Updated Oct 16, 2024
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence · ☆438 · Updated Feb 5, 2026
- [ICLR 2025 Oral] Official Implementation for "Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Un…" · ☆21 · Updated Oct 24, 2024
- [RSS 2024] 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations · ☆1,262 · Updated Oct 17, 2025
- [CoRL 2024] Im2Flow2Act: Flow as the Cross-domain Manipulation Interface · ☆150 · Updated Oct 17, 2024
- [CVPR 2023 Highlight] GAPartNet: Cross-Category Domain-Generalizable Object Perception and Manipulation via Generalizable and Actionable … · ☆145 · Updated Oct 29, 2024
- Code for RoboFlamingo · ☆424 · Updated May 8, 2024
- A simulation platform for versatile Embodied AI research and development. · ☆1,209 · Updated Sep 4, 2025
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" · ☆204 · Updated Nov 13, 2024
- [CVPR 2026] Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models · ☆170 · Updated this week