abliao / RoBridge
[ICCV2025] RoBridge: A Hierarchical Architecture Bridging Cognition and Execution for General Robotic Manipulation
☆34 · Updated 5 months ago
Alternatives and similar repositories for RoBridge
Users interested in RoBridge are comparing it to the libraries listed below.
- ☆100 · Updated last week
- ☆69 · Updated last month
- ☆62 · Updated 10 months ago
- ☆87 · Updated 7 months ago
- Official PyTorch implementation for ICML 2025 paper: UP-VLA. ☆51 · Updated 6 months ago
- Official code for VLA-OS. ☆132 · Updated 6 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆205 · Updated 7 months ago
- Fast-in-Slow: A Dual-System Foundation Model Unifying Fast Manipulation within Slow Reasoning ☆135 · Updated 5 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆79 · Updated 7 months ago
- ☆42 · Updated 5 months ago
- ICCV2025 ☆145 · Updated 3 weeks ago
- The Official Implementation of RoboMatrix ☆104 · Updated 7 months ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆217 · Updated 2 weeks ago
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆88 · Updated 3 months ago
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆57 · Updated 3 months ago
- [Embodied-Intelligent-Industrial-Robotics-Survey-2025] Paper list for Embodied Intelligent Industrial Robotics (EIIR) ☆23 · Updated 4 months ago
- ☆62 · Updated 11 months ago
- F1: A Vision-Language-Action Model Bridging Understanding and Generation to Actions ☆150 · Updated 2 months ago
- ☆67 · Updated 10 months ago
- [RSS 2024] Learning Manipulation by Predicting Interaction ☆118 · Updated 5 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆263 · Updated 3 months ago
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆98 · Updated last year
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` ☆121 · Updated last year
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆130 · Updated 3 months ago
- ✨✨ [NeurIPS 2025] Official implementation of BridgeVLA ☆163 · Updated 3 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆123 · Updated 4 months ago
- MLA: A Multisensory Language-Action Model for Multimodal Understanding and Forecasting in Robotic Manipulation ☆55 · Updated last month
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained 🔥] ☆173 · Updated 2 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆154 · Updated 8 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆115 · Updated 4 months ago