Zhoues / RoboRefer
Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics"
☆135 · Updated 3 weeks ago
Alternatives and similar repositories for RoboRefer
Users who are interested in RoboRefer are comparing it to the repositories listed below.
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆158 · Updated this week
- Unified Vision-Language-Action Model ☆181 · Updated last month
- [CVPR 2025] Lift3D Foundation Policy: Lifting 2D Large-Scale Pretrained Models for Robust 3D Robotic Manipulation ☆154 · Updated 2 months ago
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆188 · Updated last month
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆275 · Updated 2 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆126 · Updated this week
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆177 · Updated 2 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆71 · Updated 3 months ago
- ICCV2025 ☆113 · Updated 3 weeks ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models" ☆297 · Updated 3 months ago
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning" ☆41 · Updated last month
- ☆79 · Updated 3 months ago
- A curated list of large VLM-based VLA models for robotic manipulation. ☆36 · Updated last week
- 🦾 A Dual-System VLA with System2 Thinking ☆92 · Updated last week
- ☆55 · Updated 6 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆29 · Updated last week
- [ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆122 · Updated 3 months ago
- WorldVLA: Towards Autoregressive Action World Model ☆358 · Updated last month
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆225 · Updated last month
- The official repo of the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆132 · Updated 8 months ago
- ✨✨ Official implementation of BridgeVLA ☆118 · Updated last month
- Official implementation of the paper "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling" ☆192 · Updated last week
- OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆59 · Updated last month
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆137 · Updated 4 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆130 · Updated 10 months ago
- ☆49 · Updated 10 months ago
- [CVPR 2025] Tra-MoE: Learning Trajectory Prediction Model from Multiple Domains for Adaptive Policy Conditioning ☆43 · Updated 4 months ago
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆110 · Updated 3 months ago
- ☆54 · Updated 7 months ago
- Official code for VLA-OS. ☆94 · Updated 2 months ago