BAAI-DCAI / SpatialBot
The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models".
☆317 Updated 2 months ago
Alternatives and similar repositories for SpatialBot
Users interested in SpatialBot are comparing it to the repositories listed below.
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆257 Updated last week
- WorldVLA: Towards Autoregressive Action World Model ☆539 Updated last month
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆172 Updated 3 weeks ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆193 Updated 3 weeks ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆593 Updated last year
- [ICML 2024] Official code repository for the 3D embodied generalist agent LEO ☆465 Updated 6 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆318 Updated last month
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆203 Updated 4 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆224 Updated 2 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆570 Updated 4 months ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ☆250 Updated 3 weeks ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆433 Updated last week
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆281 Updated 11 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆286 Updated last year
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆374 Updated 2 weeks ago
- Unified Vision-Language-Action Model ☆226 Updated last month
- The repository for the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆139 Updated 10 months ago
- ☆409 Updated 9 months ago
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆167 Updated 4 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆194 Updated 5 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆102 Updated 2 months ago
- A curated list of large VLM-based VLA models for robotic manipulation. ☆250 Updated last week
- ICCV2025 ☆142 Updated this week
- ☆313 Updated this week
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆212 Updated last week
- Paper list from the survey "A Survey on Vision-Language-Action Models: An Action Tokenization Perspective" ☆323 Updated 4 months ago
- The official codebase for "ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation" (CVPR 2024) ☆143 Updated last year
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆179 Updated last month
- Code for "MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World" ☆134 Updated last year
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆400 Updated 9 months ago