zwq2018 / embodied_reasoner
Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks
☆176 · Updated last month
Alternatives and similar repositories for embodied_reasoner
Users interested in embodied_reasoner are comparing it to the repositories listed below.
- ☆54 · Updated 6 months ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ☆240 · Updated last month
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official Repository. ☆323 · Updated 2 weeks ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆200 · Updated last week
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆137 · Updated 10 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆312 · Updated last month
- WorldVLA: Towards Autoregressive Action World Model ☆472 · Updated 2 weeks ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆113 · Updated 8 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆114 · Updated 2 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆191 · Updated last week
- [CVPR 2024] The official implementation of MP5 ☆105 · Updated last year
- ☆64 · Updated last week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆310 · Updated 2 months ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆176 · Updated 3 weeks ago
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks ☆180 · Updated 2 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆297 · Updated 3 weeks ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations https://video-prediction-policy.github.io ☆283 · Updated 5 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆162 · Updated last month
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆191 · Updated 4 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆203 · Updated last month
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆144 · Updated 6 months ago
- [ICCV 2025] RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints ☆88 · Updated last month
- [NeurIPS 2025] VIKI-R: Coordinating Embodied Multi-Agent Cooperation via Reinforcement Learning ☆54 · Updated last week
- starVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆203 · Updated last week
- Official Code for EnerVerse-AC: Envisioning Embodied Environments with Action Condition ☆123 · Updated 3 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆74 · Updated 5 months ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ☆121 · Updated last year
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆541 · Updated 4 months ago
- ICCV 2025 ☆135 · Updated 2 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆307 · Updated 2 months ago