FlagOpen / RoboBrain-X0
☆88 · Updated 3 weeks ago
Alternatives and similar repositories for RoboBrain-X0
Users interested in RoboBrain-X0 are comparing it to the repositories listed below.
- 🦾 A Dual-System VLA with System2 Thinking ☆115 · Updated 2 months ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ☆250 · Updated 3 weeks ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆76 · Updated 6 months ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆179 · Updated last month
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks ☆185 · Updated 3 months ago
- Official repository for the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆139 · Updated 10 months ago
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆135 · Updated 3 weeks ago
- Unified Vision-Language-Action Model ☆226 · Updated last month
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆81 · Updated last month
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆117 · Updated 9 months ago
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆51 · Updated 2 months ago
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆66 · Updated last month
- ICCV 2025 ☆142 · Updated this week
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆85 · Updated 5 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆172 · Updated 3 weeks ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" ☆212 · Updated last week
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆177 · Updated 2 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆194 · Updated 5 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆257 · Updated last week
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆151 · Updated 7 months ago
- [NeurIPS 2025] VIKI‑R: Coordinating Embodied Multi-Agent Cooperation via Reinforcement Learning ☆56 · Updated 3 weeks ago
- Official repository for "RLVR-World: Training World Models with Reinforcement Learning" (NeurIPS 2025), https://arxiv.org/abs/2505.13934 ☆135 · Updated 3 weeks ago
- Official implementation for BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation ☆90 · Updated 4 months ago
- Official Implementation of "Align-Then-stEer: Adapting the Vision-Language Action Models through Unified Latent Guidance" ☆33 · Updated last month
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆400 · Updated 9 months ago
- WorldVLA: Towards Autoregressive Action World Model ☆539 · Updated last month
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official Repository. ☆341 · Updated last month
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ☆295 · Updated 6 months ago
- Running VLA at 30Hz frame rate and 480Hz trajectory frequency ☆242 · Updated last week